Hexagonal Architecture: Separating Business Logic from the Technical Codebase
It is a design pattern whose main objective is to separate the business-logic codebase from the technical codebase. This architecture introduces an abstraction layer in which all inputs, outputs, and auxiliary functions are kept apart from the core domain logic.
A hexagon shape represents this architectural pattern, with a business component in the core surrounded by services on its six sides. This pattern is mainly used for application architectures and allows the creation of "ports" and "adapters" for the business domain in the core.
These services are integrated with the core domain through defined interfaces. The design of these interfaces should be technology agnostic and should not be tied to any particular language, making it possible to evolve the services without affecting the core.
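The ports-and-adapters idea above can be sketched as follows. This is a minimal illustration, not any framework's API; the names (`NotificationPort`, `ConsoleAdapter`, `notify_customer`) are assumptions made for the example:

```python
from abc import ABC, abstractmethod

# Port: a technology-agnostic interface owned by the core domain.
class NotificationPort(ABC):
    @abstractmethod
    def send(self, recipient: str, message: str) -> str: ...

# Adapter: one concrete technology plugged into the port.
# Swapping it (e.g., for an SMTP or queue adapter) never touches the core.
class ConsoleAdapter(NotificationPort):
    def send(self, recipient: str, message: str) -> str:
        return f"console -> {recipient}: {message}"

# Core domain logic depends only on the port, never on a concrete adapter.
def notify_customer(port: NotificationPort, customer: str) -> str:
    return port.send(customer, "Your order has shipped")
```

Because the core only sees `NotificationPort`, the adapter can evolve, or be replaced entirely, without any change to the domain code.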
The overall idea is to have composable capabilities that can be reused. In most cases, these capabilities become deployable units that can be used within business-domain microservices, applications, or as stand-alone capabilities. They can take the shape of libraries, services, or sidecars. Modern trends try to formalize these patterns as frameworks; the best known are Dapr and CloudState.
This architecture allows late binding, postponing the implementation or deployment decisions to later stages of the development cycle.
In terms of advantages and disadvantages, Hexagonal Architecture aims to increase the codebase's maintainability by decoupling the core code from the rest, simplifying the packaging strategy, and isolating the testing of functionality. On the downside, it adds some complexity at build and debugging time, and it may add latency because of the extra hops between the added abstraction layers.
The services implemented in Hexagonal Architecture can be classified into three categories. Microsoft has recently started providing container services with built-in Dapr capabilities; however, Dapr currently covers only some of the hexagonal architecture capabilities proposed in this document.
- Domain Services: They are components responsible for implementing the business domain logic. They follow the Domain-Driven Design (DDD) taxonomy, e.g., "Aggregate Root," "Value Object," etc.
- Application Services: They are components responsible for orchestrating the execution of domain logic. They form a layer with a defined interface, tailored per scenario, that interacts with the business domain.
- Framework Services: Framework or Infrastructure Services are underlying components that contain the technology needed to run the Application.
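The three categories can be sketched with a hypothetical order-processing example. The names and the discount policy below are illustrative assumptions, not a prescribed design:

```python
# Domain service: pure business logic (where DDD entities and value objects live).
def order_total(prices: list[float], discount: float) -> float:
    return round(sum(prices) * (1 - discount), 2)

# Framework service: an infrastructure concern, here a trivial in-memory store
# standing in for a real database or messaging technology.
class OrderStore:
    def __init__(self) -> None:
        self._orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def get(self, order_id: str) -> float:
        return self._orders[order_id]

# Application service: orchestrates domain logic and framework services
# for one scenario, without containing business rules itself.
def place_order(store: OrderStore, order_id: str, prices: list[float]) -> float:
    total = order_total(prices, discount=0.10)  # assumed 10% promotional discount
    store.save(order_id, total)
    return total
```

Note how the application service only sequences calls; the business rule (how a total is computed) stays in the domain service, and the persistence detail stays in the framework service.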
The hexagonal architecture is not an exclusive design for microservices. It is an architectural pattern that Alistair Cockburn first detailed in 2005. At the time, the deployment options for these pluggable capabilities were not advanced, so shared libraries were the only practical way to implement them. It is difficult to determine whether this was a predecessor of the microservices architecture.
The microservices architecture became mainstream, and at the same time, runtime deployments evolved. Kubernetes allows a microservice to run with multiple runtimes: one dedicated exclusively to the business domain or logic, and others for utility services as required.
From the viewpoint of Hexagonal Architecture in a business enterprise, this application taxonomy opens the possibility of creating highly reusable components.
On one side, these utility capabilities can be designed with different languages, runtimes, and technology options. They are completely isolated from the business microservice and can evolve independently of any other component. In addition, with good versioning and deployment practices, they can be deployed and recycled without affecting the runtime of other components.
These components provide a function through a standard interface; therefore, the development of business capabilities does not need to wait on the many architecture options of a project, and these technology decisions can be delayed until later stages. Even the deployment options can be deferred. This design allows several alternatives for deploying these utility services:
- Embedded shared libraries in the business microservices runtime.
- A business microservices sidecar.
- Complete independent components.
In conclusion, investing in building components using this application architecture taxonomy opens a new economy of scale for software development, where the pieces, if they are well designed, could be reused on a large scale. In the microservice architecture that comprises a business domain and other technical capabilities, it is possible to separate the critical business logic from different codebases using the hexagonal architecture principles.
A business domain defines the area of operation of an application or system. Domain-Driven Design (DDD) describes a domain as a sphere of knowledge, influence, or activity: the subject area to which the user applies a program is the domain of the software. A DDD domain thus combines knowledge (understanding what happens with certain data or events, typically the area in which you or your Application hold the primary business perspective), influence (the impact your actions or activities have on the business), and activity (the specific tasks performed with that knowledge and the parties in your business area they affect). The technical capabilities that could be considered in the hexagonal architecture are the following:
Caching Services: It is a component that provides the methods needed to manage the lifecycle of content in a caching database. These databases are faster than on-disk databases because they keep the content in memory, increasing throughput and lowering data-retrieval latency.
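A minimal sketch of such a caching service, assuming an in-memory store with time-to-live (TTL) expiry; the class name, the lazy-eviction policy, and the injectable clock are illustrative choices, not a specific product's API:

```python
import time

class InMemoryCache:
    """Toy caching service: keeps entries in memory and expires them by TTL."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock  # injectable for testing
        self._store: dict[str, tuple[float, object]] = {}  # key -> (expiry, value)

    def put(self, key: str, value) -> None:
        self._store[key] = (self._clock() + self._ttl, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if self._clock() >= expires_at:  # lazily evict expired entries on read
            del self._store[key]
            return None
        return value
```

Real caching services (Redis, Memcached, etc.) add eviction strategies, distribution, and persistence options on top of this same lifecycle: write, read, expire.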
Connectivity Services: It is a component that implements the integration services and other primitive functions and provides these services to the business microservice. It abstracts protocol specifics, technical connectivity and retries, and message data formats.
Document Storage Manager (DMS): It is a module responsible for managing images, documents, and other structured and semi-structured collateral information collected as part of the normal operations of the Application.
Entitlement Manager: It is a component that is responsible for granting, resolving, enforcing, revoking, and administering application and services fine-grained access entitlements (also referred to as “authorizations,” “privileges,” “access rights,” “permissions”, and “rules”).
Error Handler: Error handling refers to the anticipation, detection, and resolution of application errors, programming errors, or communication errors. It covers response and recovery procedures from error conditions present in a software application.
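One common shape for such an error handler is retry-with-backoff for transient failures. The sketch below is an illustrative assumption (the 3-attempt limit, doubling delay, and injectable `sleep` are choices for the example, not a standard):

```python
import time

def with_retries(operation, attempts: int = 3, base_delay: float = 0.01,
                 sleep=time.sleep):
    """Run `operation`; on failure, back off and retry up to `attempts` times."""
    last_error = None
    for attempt in range(attempts):
        try:
            return operation()            # resolution: the call succeeded
        except Exception as exc:          # detection: an error condition occurred
            last_error = exc
            sleep(base_delay * (2 ** attempt))  # recovery: exponential backoff
    raise last_error                      # escalation: retries exhausted
```

A real error handler would also distinguish retryable from non-retryable errors and record each failure for the logging service.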
Event Streaming: It refers to the technology that makes it possible to transform discrete input and output data units into a continuous stream. Streams can be consumed from one or many event topics, and combining them produces an event-join condition. Only the delta records since the last run are processed.
Integration Services: A component that will centralize the integration with External Parties. It will contain all the data required to be autonomous and consist of sub-modules specializing in different protocols and flows. e.g., Events, HTTP, Files, Incoming, Outgoing, Batch, Synchronous, Asynchronous, etc.
Integration Services – Event Broker: It is a module within Integration Services that pushes data from the central system to External Systems. It captures data changes from internal events and invokes callback URLs exposed by External Systems.
Integration Services – ETL (Extract Transform and Load): The ETL is part of Integration Services. It retrieves (Extracts or Pulls) data sets from the External Systems, Transforms them, and Loads them into a central system. These External Systems should have been registered in our Integration Portal.
Logging services: In a distributed system, each service generates its own log trail, and a problematic transaction may span more than one service. Therefore, a solution is required to standardize the lifecycle of logs: the logging format, local capture, shipping of the records, aggregation of logs from each service instance, and analysis of the records. It is based on commodity technology: Fluentd, OpenSearch (a fork of Elasticsearch), and Kibana Dashboards.
Object-relational mapping (ORM) is a design technique for converting data between an application and the database engine using an object-oriented programming paradigm. It provides the effect of a “virtual object database,” which facilitates the development of a business domain and business logic. In addition, it can provide extra benefits when implemented as a library or a sidecar. It abstracts all access and connectivity with the database. It follows the principles of separation of concerns based on domains and decentralized ownership by applying discrete responsibility to self-contained contexts as microservices. When referring to the microservices data, it creates a “data mesh” layer, a concept related to enabling “data as a product.” It conforms to distributed taxonomy and can be used by data analytics consuming “data as a service.”
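The core ORM idea, converting between domain objects and database rows so that domain code never writes SQL, can be sketched with a hand-rolled data mapper. The `Customer` class, table name, and SQLite backend here are assumptions for illustration; real ORMs (SQLAlchemy, Hibernate, etc.) generate this machinery:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    """Domain object: knows nothing about persistence."""
    id: int
    name: str

class CustomerMapper:
    """Mapper: the only place where SQL and the schema are known."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS customers "
                     "(id INTEGER PRIMARY KEY, name TEXT)")

    def save(self, customer: Customer) -> None:
        with self._conn:  # commit on success, roll back on error
            self._conn.execute(
                "INSERT OR REPLACE INTO customers VALUES (?, ?)",
                (customer.id, customer.name))

    def find(self, customer_id: int) -> Optional[Customer]:
        row = self._conn.execute(
            "SELECT id, name FROM customers WHERE id = ?",
            (customer_id,)).fetchone()
        return Customer(*row) if row else None
```

Packaged as a library or a sidecar, this mapping layer is exactly the database-access abstraction the paragraph above describes.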
Transactional Outbox: It is a design pattern and one of the best approaches for solving the "Dual Write" problem. The Dual Write problem arises when a microservice needs to keep its state private and, at the same time, notify that state change to a broader audience; this pattern makes those two steps execute atomically as a single task. It is implemented as a component responsible for tracking the changes in the microservice's database through a dedicated table called "Outbox." The Outbox table stores documents describing the changes made in the business schema, and these payloads are published as Events in the Business Domain Topic.
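The pattern can be sketched with SQLite: the business row and the outbox row are written in one transaction, so the dual write cannot half-succeed, and a separate relay later publishes unpublished outbox rows as events. Table and topic names below are illustrative assumptions:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "topic TEXT, payload TEXT, published INTEGER DEFAULT 0)")

def create_order(order_id: str) -> None:
    # ONE atomic transaction covers both the state change and the outbox record.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "NEW"))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("orders.domain", json.dumps({"id": order_id, "status": "NEW"})))

def relay_outbox(publish) -> int:
    """The relay reads unpublished rows, emits them as events, marks them done."""
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, topic, payload in rows:
        publish(topic, json.loads(payload))
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)
```

In production the relay is usually change-data-capture tooling (e.g., Debezium tailing the database log) rather than polling, but the atomic-write guarantee is the same.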
Publisher & Subscriber: It is an integration pattern where components called Publishers place events (messages) on queues (or topics), and other system components called Subscribers consume these events. The Publishers and the Subscribers do not know of each other's existence. The pattern allows message filtering, where Subscribers choose to receive only a subset of the total messages published. There are two common forms of filtering: topic-based and content-based. In a topic-based system, messages are published to "topics"; in a content-based system, a Subscriber receives only the messages whose attributes or content match criteria it defines.
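A minimal topic-based broker makes the decoupling concrete: publishers and subscribers each know only the broker, never each other. The `Broker` class and topic names are illustrative, not a specific messaging product:

```python
from collections import defaultdict

class Broker:
    """Toy topic-based pub/sub broker."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message) -> None:
        # Topic-based filtering: only handlers subscribed to this topic
        # ever see the message.
        for handler in self._subscribers[topic]:
            handler(message)
```

A content-based variant would register a predicate alongside each handler and call the handler only when `predicate(message)` is true.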
Reference Data Manager
Reference data is data used to classify or categorize other data; it is typically static or changes slowly over time. The Reference Data Manager is a component responsible for managing an application's reference data and simplifying how this data is shared across other components and systems.
Rule Engine – Business Rule Engine (BRE)
It is specific software that allows defining, analyzing, executing, auditing, and maintaining a wide variety of business logic, collectively called “rules.” It enables business stakeholders to keep updated with the rules using a simplified user interface to configure decision trees, decision tables, etc.
Session Manager
It is a component responsible for the lifecycle management of the user session. The session is created after the user authenticates. For each API invocation, the system validates that the session is still valid before the flow reaches the Application. After a period of inactivity or an expiration time, the session becomes invalid.
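That lifecycle (create on authentication, validate per call, expire on inactivity) can be sketched as follows; the token format, the default timeout, and the injectable clock are illustrative assumptions:

```python
import time
import uuid

class SessionManager:
    """Toy session lifecycle manager with inactivity-based expiry."""

    def __init__(self, timeout_seconds: float = 1800, clock=time.monotonic):
        self._timeout = timeout_seconds
        self._clock = clock
        self._sessions: dict[str, float] = {}  # token -> last-seen timestamp

    def create(self) -> str:
        """Called after the user authenticates successfully."""
        token = uuid.uuid4().hex
        self._sessions[token] = self._clock()
        return token

    def validate(self, token: str) -> bool:
        """Called on every API invocation before the flow reaches the app."""
        last_seen = self._sessions.get(token)
        if last_seen is None or self._clock() - last_seen > self._timeout:
            self._sessions.pop(token, None)  # drop expired or unknown sessions
            return False
        self._sessions[token] = self._clock()  # activity refreshes the session
        return True
```

Deployed as a sidecar or gateway filter, this check keeps session handling out of the business microservice entirely.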
Workflow Engine – Workflow (microservices-workflow)
It is an end-to-end orchestrator of activities. The components of a system can be participants in an end-to-end flow organized as a predetermined, scheduled business flow that allows tracking a sequence of activities from start to end. In the context of microservices, the workflow invokes microservice components, creating an orchestration pattern. Because microservices are designed to be decoupled from one another, the orchestration pattern must be carefully planned and adapted to microservices principles, turning workflow orchestration into a loosely coupled integration between components. This pattern of loosely connecting microservices through a workflow is known as "microservices-workflow."
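A bare-bones orchestrator makes the idea concrete: steps stay decoupled (each is just a callable sharing a context), while the engine owns the predetermined sequence and tracks it from start to end. The step names and dict-based context are illustrative assumptions; production engines (Temporal, Camunda, etc.) add persistence, retries, and compensation:

```python
class Workflow:
    """Toy workflow engine: runs registered steps in a fixed order."""

    def __init__(self) -> None:
        self._steps = []  # ordered (name, callable) pairs

    def add_step(self, name: str, func) -> "Workflow":
        self._steps.append((name, func))
        return self  # allow fluent chaining when defining the flow

    def run(self, context: dict) -> list[str]:
        """Execute every step in order; return the audit trail of step names."""
        trail = []
        for name, func in self._steps:
            func(context)       # each participant only sees the shared context
            trail.append(name)
        return trail
```

The participants never call each other directly; only the workflow definition knows the end-to-end sequence, which is what keeps the integration loosely coupled.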