How to Migrate to Clean Code Architecture

Jorge Collado García, Software Engineer, gives a quick overview of how to migrate and restructure your current project to the Clean Code paradigm.

The Clean Code architecture is a pattern based on arranging inputs and outputs at the edges of our design. Business logic should not depend on whether we expose a REST or a GraphQL API, and it should not rely on where we get data from: a database, a microservice API, or just a simple CSV file.

The pattern allows us to isolate the core logic of our application from outside concerns. Having the core logic isolated means we can change data source details without significant rewrites of the rest of the codebase.

The hexagonal architecture is based on three principles and techniques:

  • Explicitly separate User-Side, Business Logic, and Server-Side.
  • User-Side and Server-Side components use the Business Logic component as a dependency.
  • We isolate the boundaries by using Ports and Adapters.

💡 This pattern is focused on separating the domain code from the application and infrastructure code just like Domain-Driven Design (DDD).

Below is a small guide in case your team is interested in migrating to this paradigm.

Core Concepts: may the domain be with you

The innermost circle is the Business Logic Layer, where domain objects (entities) and Use Cases are defined; after that, we have the interface adapters or ports.

  • Entities or domain objects. They do not know where they’re stored.
  • Ports are the interfaces for retrieving entities and working with them. They declare the methods used to communicate with data sources.
  • Use cases (also called interactors) are classes that orchestrate and perform actions on entities. They implement the business rules and validations for specific domain actions.


With these three main concepts, the business logic is defined without any knowledge of where the data is kept or how the business logic is triggered.
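
Here is a minimal Kotlin sketch of how the three concepts might fit together. The names (Book, BookRepository, ReserveBookUseCase) are purely illustrative, not taken from any particular project:

```kotlin
// Entity: a plain domain object, unaware of where or how it is stored.
data class Book(val isbn: String, val title: String, val stock: Int)

// Port: the interface the business logic uses to reach a data source.
// Only the method signatures live in the core; implementations live in adapters.
interface BookRepository {
    fun findByIsbn(isbn: String): Book?
    fun save(book: Book)
}

// Use case (interactor): orchestrates entities and enforces business rules,
// talking to the outside world only through the port.
class ReserveBookUseCase(private val books: BookRepository) {
    fun execute(isbn: String): Book {
        val book = books.findByIsbn(isbn)
            ?: throw IllegalArgumentException("Unknown book: $isbn")
        check(book.stock > 0) { "Book is out of stock" }
        val reserved = book.copy(stock = book.stock - 1)
        books.save(reserved)
        return reserved
    }
}
```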

The outer circle is reserved for the interface adapter implementations, which include the Data Source and Transport Layer concepts:

  • Data Sources or adapters to different storage implementations.
    💡 A data source might be an adapter to a SQL database, an Elasticsearch adapter, a REST API, or even an adapter to something simple such as a CSV file or a Hash.
  • The Transport Layer, or input layer, triggers an interactor to perform business logic. The most common transport layer for microservices is an HTTP API with controllers that handle requests.
    💡 By having business logic extracted into interactors, those methods are not coupled to a particular transport layer or controller implementation; they can be triggered not only by a controller but also by an event, a cron job, or from the command line.
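
As an illustration of the transport layer idea, here is a small Kotlin sketch of two thin entry points driving the same use case from the earlier sketch; neither one contains business rules, and both names are hypothetical:

```kotlin
// HTTP-style controller: translates a request into a use-case call and a response.
// (ReserveBookUseCase comes from the core sketch above.)
class BookController(private val reserveBook: ReserveBookUseCase) {
    fun post(isbn: String): Pair<Int, String> =
        try {
            val book = reserveBook.execute(isbn)
            200 to "Reserved '${book.title}', ${book.stock} left"
        } catch (e: IllegalArgumentException) {
            404 to (e.message ?: "Not found")
        } catch (e: IllegalStateException) {
            409 to (e.message ?: "Conflict")
        }
}

// Command-line entry point: the same business logic, triggered without HTTP at all.
fun reserveFromCommandLine(reserveBook: ReserveBookUseCase, args: Array<String>) {
    val isbn = args.firstOrNull() ?: error("Usage: reserve <isbn>")
    println(reserveBook.execute(isbn))
}
```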

Migration: how to start working on it

Below is a small guide to the main migration requirements. First of all, you will have to choose, add, and configure some prerequisites for the project skeleton, depending on which language you will use. Apart from this, the project structure must follow the concepts described above:

  • core - where the use cases and entities will be stored.
  • adapter - where the data sources and transport layers will be implemented. This module will import core.
  • application - this module holds the project’s main class and the integration tests. This module imports adapter.
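
As a sketch of what this structure might look like in practice, assuming a Gradle multi-module build with the Kotlin DSL (the same layout can be expressed with Maven modules, npm workspaces, and so on):

```kotlin
// settings.gradle.kts - one module per layer (the project name is illustrative)
rootProject.name = "bookstore"
include("core", "adapter", "application")

// adapter/build.gradle.kts - adapters may depend on the core, never the other way around
dependencies {
    implementation(project(":core"))
}

// application/build.gradle.kts - the outermost module wires everything together
dependencies {
    implementation(project(":adapter"))
}
```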

💡 If you come from an MVC configuration, you will have to split the controllers into two parts: business logic and HTTP interactions. Then, move the business logic to the Core module and keep those HTTP interactions and models in the Adapter module.

Core Module

The core module will hold all of the business logic the current project has. Along with that logic, all the domain objects will be moved here.

Here, the important challenge is creating the interfaces used to call the data sources. The use cases will call these interfaces, or ports, instead of calling the adapter module directly.

After this step, the code will be completely decoupled from the current data source and transport layer.

Adapter Module

The adapter module will contain all of the service’s inputs and outputs. This is where the interface/port implementations live, completely unknown to the core module, along with the transport layer.

The first step is to create one package per input/output the project has, and then distribute all adapter-related code into its corresponding package.

💡 Inputs and outputs can be databases, HTTP endpoints, Kafka message exchangers, file systems, etc.
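
Below is a minimal Kotlin sketch of one such package: a data source adapter implementing the hypothetical BookRepository port from the core sketch. Here the storage is just an in-memory map, but a SQL, Elasticsearch, or CSV adapter would implement exactly the same interface:

```kotlin
// adapter: one possible implementation of the core's BookRepository port.
// The core never knows whether it is talking to this map, a SQL table, or a CSV file.
class InMemoryBookRepository : BookRepository {
    private val storage = mutableMapOf<String, Book>()

    override fun findByIsbn(isbn: String): Book? = storage[isbn]

    override fun save(book: Book) {
        storage[book.isbn] = book
    }
}
```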

Application Module

The most important module is called application. This module will be the smallest one, because you only need to move the main class and all the integration tests here.

💡 Integration testing is a type of software testing in which the different modules or components are tested as a combined entity.
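
For the hypothetical types used in the earlier sketches, the application module’s main class could look like this; it is the only place that knows which concrete adapters are plugged into which ports:

```kotlin
// application: the composition root wiring adapters into the core.
fun main() {
    val repository: BookRepository = InMemoryBookRepository()  // data source adapter
    val reserveBook = ReserveBookUseCase(repository)           // core use case
    val controller = BookController(reserveBook)               // transport adapter

    // In a real service an HTTP server would route requests to the controller;
    // here we exercise it once just to show the wiring.
    repository.save(Book(isbn = "123-456", title = "Clean Architecture", stock = 1))
    println(controller.post("123-456"))
}
```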

Final Thoughts: why migrate

Now that you know the general pattern, you can see how easy it is to migrate from your old configuration. The pattern is not new, and there are many articles about it; moreover, the community seems happy with it and is migrating to this format. Coding in the Clean Code manner has many advantages, such as the following:


💡 Everything outside the Adapter layer can be easily tested: just pure logic, no IO operations, no frameworks that need to be up and running, no UI. Automation can handle the rest.


💡 Any third-party library, or even the entire framework, can easily be changed since these live in the outermost, most volatile layer. This also means that while building the app, it is not necessary to decide upfront which libraries will be used or even what kind of storage.

💡 Developers unfamiliar with the codebase can easily find and understand what’s going on; this results from structuring the project around use cases instead of the underlying frameworks.
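
To illustrate the first point, here is a minimal sketch of a unit test (using kotlin.test and the hypothetical types from the earlier sketches) that exercises the use case with the in-memory adapter: no database, no HTTP server, and no framework needs to be running.

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals
import kotlin.test.assertFailsWith

class ReserveBookUseCaseTest {
    // The port is satisfied by the in-memory fake, so the test is pure logic.
    private val repository = InMemoryBookRepository()
    private val useCase = ReserveBookUseCase(repository)

    @Test
    fun `reserving a book decrements its stock`() {
        repository.save(Book(isbn = "123", title = "Refactoring", stock = 2))
        assertEquals(1, useCase.execute("123").stock)
    }

    @Test
    fun `reserving an unknown book fails`() {
        assertFailsWith<IllegalArgumentException> { useCase.execute("missing") }
    }
}
```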

Related articles:

Christian Mülhaupt, Head of Architecture, explains how we leverage the event-driven architecture paradigm and use Kafka in conjunction with our business services.