Microservices vs Monoliths
Designing Modular Monoliths for Better Code Separation
Learn how to organize internal logic into distinct modules within a single deployment unit to achieve maintainability without the complexity of a distributed network.
The False Choice of Distributed Systems
Modern software engineering often presents a binary choice between a messy monolith and a complex network of microservices. Many teams jump into microservices prematurely because they believe it is the only way to scale their applications effectively. This decision often leads to the creation of a distributed monolith where services are physically separated but logically coupled.
A distributed monolith inherits the weaknesses of both architectures without providing the benefits of either. You face the deployment challenges of multiple services while still dealing with tight coupling that prevents independent scaling. The result is often a system that is harder to debug and slower to develop than a single unified codebase.
The primary reason for this struggle is the premature introduction of the network. Every time a function call is replaced by a network request, you introduce latency, partial failure modes, and serialization overhead. Managing these concerns requires significant infrastructure investment that small to medium-sized teams may not be ready to sustain.
Choosing a modular monolith allows you to avoid these distributed system traps while your product is still evolving. It provides a way to maintain high architectural standards and clear boundaries within a single deployment unit. This approach prioritizes logical organization over physical distribution, ensuring that your code remains maintainable as the complexity of the domain grows.
The first rule of distributed objects, as Martin Fowler put it, is: don't distribute your objects. Many systems are better served by strong logical boundaries within a single process than by physical separation across a network.
The goal of a modular monolith is to achieve high cohesion and low coupling through strict internal boundaries. Each module should own its data and business logic, exposing only a narrow interface to the rest of the system. This setup provides the flexibility to split modules into independent services later if the need for independent scaling truly arises.
Identifying the Network Tax
Every network hop between services adds a layer of complexity known as the network tax. This tax includes the time spent serializing data into JSON or Protobuf and the time the packet spends traveling over the wire. In a high-traffic system, these milliseconds accumulate and can significantly degrade the user experience.
Beyond performance, the network tax includes the cognitive load of handling failure. When a local function call fails, the entire process usually stops or throws an exception you can catch. When a remote service fails, you must implement retries, circuit breakers, and timeouts to keep the system stable.
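To make that cognitive load concrete, here is a minimal sketch of the retry-and-timeout machinery a remote call forces on you. The helper names (`withTimeout`, `withRetries`) are illustrative, not from any real library; a production system would add backoff and a circuit breaker on top.

```typescript
// Sketch of the extra machinery a remote call needs. A local, in-process
// call needs none of this: it either returns or throws.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer!);
  }
}

async function withRetries<T>(fn: () => Promise<T>, attempts: number): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), 1000); // per-attempt timeout
    } catch (err) {
      lastError = err; // a real system would back off before retrying
    }
  }
  throw lastError;
}
```

Every remote boundary in a system needs a wrapper like this; every in-process module boundary gets the same reliability for free.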
Monitoring also becomes more difficult once you move to a distributed model. Tracing a single user request across five different services requires specialized tooling like OpenTelemetry and distributed tracing platforms. In a modular monolith, you can often follow the execution path using standard debugging tools and simple structured logs.
The Myth of Independent Scaling
Teams often cite independent scaling as the main driver for microservices. They argue that if the payment processing logic is under heavy load, it should be scaled separately from the product catalog. While this is true in theory, many applications do not reach the level of traffic where this physical separation becomes a necessity.
In many cases, scaling the entire monolith vertically or horizontally is more cost-effective than managing individual service clusters. Modern cloud providers offer powerful instances that can handle massive throughput for a single process. Scaling a monolith simply involves spinning up more replicas of the same container behind a load balancer.
Furthermore, independent scaling is only possible if the data is also decoupled. If all your microservices still point to a single central database, that database will become the bottleneck regardless of how many service instances you run. True scaling requires architectural changes that are often easier to prototype and refine within a modular monolith.
Designing High-Integrity Modules
A successful modular monolith depends on how you define the boundaries between different parts of the application. Instead of organizing code by technical layers like controllers or services, you should organize it by business domains. This approach, rooted in Domain-Driven Design, ensures that all logic related to a specific feature stays together.
Each module should be treated as a black box with a well-defined public API. Internal implementation details, such as database schemas or private helper classes, should never be accessible from outside the module. This encapsulation allows you to change how a module works internally without breaking other parts of the system.
Dependency management is the most critical aspect of maintaining a modular structure. You must strictly control which modules are allowed to depend on each other to avoid circular dependencies. A common strategy is to have a core module that contains shared entities while keeping feature modules isolated from one another.
```typescript
// src/orders/types.ts (shared with callers)
export interface Item { price: number }
export interface Order { id: string; total: number }
export interface Status { state: string }

// src/orders/internal/order-service.ts
// This class is private to the orders module
class OrderProcessor {
  calculateTotal(items: Item[]): number {
    return items.reduce((sum, item) => sum + item.price, 0);
  }
}

// src/orders/public-api.ts
// This is the only way other modules interact with orders
export interface OrdersModule {
  createOrder(userId: string, items: Item[]): Promise<Order>;
  getOrderStatus(orderId: string): Promise<Status>;
}

export const Orders: OrdersModule = {
  createOrder: async (userId, items) => {
    const processor = new OrderProcessor();
    // Implementation details are hidden here
    return { id: "123", total: processor.calculateTotal(items) };
  },
  getOrderStatus: async (id) => ({ state: "pending" }),
};
```

In the example above, the OrderProcessor class is kept internal to the module folder. Other parts of the application, such as the shipping or billing modules, can only interact with the Orders functionality through the exported interface. This prevents the shipping logic from accidentally reaching into the internal calculation logic of the orders system.
Data ownership is another pillar of modularity. Ideally, each module should own its own tables or collections within the database. While they share the same physical database instance, they should not perform cross-module joins. If the shipping module needs order data, it should request it through the order module API rather than querying the orders table directly.
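A hedged sketch of what that looks like from the shipping side, assuming the orders module exposes a `getOrderStatus` method like the public API shown earlier (the `canShip` function and the "paid" state are illustrative):

```typescript
// Hypothetical shipping-module code. It depends only on the orders
// public API, never on the orders tables or internal classes.
interface Status { state: string }

interface OrdersApi {
  getOrderStatus(orderId: string): Promise<Status>;
}

async function canShip(orders: OrdersApi, orderId: string): Promise<boolean> {
  // Ask the owning module instead of joining against its tables.
  const status = await orders.getOrderStatus(orderId);
  return status.state === "paid";
}
```

If the orders module later changes its schema, or is extracted into its own service, this shipping code does not change.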
Encapsulation Strategies
Languages like Java and C# provide access modifiers such as package-private or internal to help enforce boundaries. In languages like TypeScript or Python, these boundaries are often enforced through folder structures and linting rules. Regardless of the language, the goal is to make it difficult for a developer to bypass the established module API.
You can use tools like Nx or custom ESLint rules to prevent unauthorized imports. For example, a rule could state that files in the catalog module can never import files from the checkout module. This automated enforcement ensures that the architecture does not degrade as more developers join the project.
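One way to express such a rule, assuming ESLint's flat config format and the built-in no-restricted-imports rule (the module paths here are illustrative):

```javascript
// eslint.config.js — sketch of boundary enforcement with ESLint's
// built-in no-restricted-imports rule; paths are illustrative.
export default [
  {
    files: ["src/catalog/**/*.ts"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            // Catalog code may not reach into checkout at all...
            { group: ["**/checkout/**"], message: "Use the checkout public API instead." },
            // ...and no module may import another module's internals.
            { group: ["**/internal/**"], message: "Internal files are private to their module." },
          ],
        },
      ],
    },
  },
];
```

Tools like Nx or dependency-cruiser express the same idea at the project-graph level rather than per-file.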
Another effective strategy is to use the Ports and Adapters pattern within each module. This separates the core business logic from the external infrastructure like databases or third-party APIs. By doing this, you make the module highly testable and ready for future moves to a different environment.
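A minimal sketch of Ports and Adapters inside a single module, with illustrative names: the core use case depends on a port interface, and an adapter supplies the infrastructure.

```typescript
// Ports-and-adapters sketch for an orders module (names illustrative).
interface Order { id: string; total: number }

// Port: what the domain needs from the outside world.
interface OrderRepository {
  save(order: Order): Promise<void>;
}

// Core business logic, free of infrastructure concerns.
class PlaceOrder {
  constructor(private repo: OrderRepository) {}
  async execute(id: string, total: number): Promise<Order> {
    const order = { id, total };
    await this.repo.save(order);
    return order;
  }
}

// Adapter: an in-memory implementation, handy for tests. A production
// adapter would wrap whatever database the module owns.
class InMemoryOrderRepository implements OrderRepository {
  orders = new Map<string, Order>();
  async save(order: Order) { this.orders.set(order.id, order); }
}
```

Because the core logic never names a concrete database, swapping the adapter for a real one, or for a remote service after extraction, leaves `PlaceOrder` untouched.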
Internal Communication Patterns
Even with strong boundaries, modules still need to communicate to complete business processes. There are two primary ways to handle this within a monolith: direct method calls and internal event dispatching. Choosing the right pattern depends on whether the interaction needs to be synchronous or asynchronous.
Direct method calls are simple and provide immediate feedback. Use them when a module cannot proceed without a response from another module. For instance, the checkout module must call the inventory module to verify stock before finalizing a purchase. Since this happens within the same process, there is no network latency to worry about.
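In code, that synchronous interaction is just an awaited method call. This is a hypothetical sketch: the `InventoryModule` interface and `finalizePurchase` function are illustrative, not from the earlier examples.

```typescript
// Hypothetical checkout-to-inventory call. Both modules live in one
// process, so this is an ordinary method call with no network hop.
interface InventoryModule {
  checkStock(sku: string, quantity: number): Promise<boolean>;
}

async function finalizePurchase(
  inventory: InventoryModule,
  sku: string,
  quantity: number
): Promise<string> {
  // Checkout cannot proceed without an answer, so a direct call fits.
  const inStock = await inventory.checkStock(sku, quantity);
  if (!inStock) throw new Error(`insufficient stock for ${sku}`);
  return "purchase-confirmed";
}
```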
Internal events are better for side effects that do not need to happen immediately. When an order is placed, the system might need to send an email, update a loyalty program, and notify the warehouse. Instead of the order module calling all these services directly, it can publish an OrderPlaced event to an internal dispatcher.
```typescript
// Simple internal mediator for decoupled communication
import { EventEmitter } from "events";

const internalBus = new EventEmitter();

// The Shipping module listens for events
internalBus.on("ORDER_PAID", (payload: { orderId: string }) => {
  console.log("Preparing shipment for order:", payload.orderId);
  // Shipping logic goes here
});

// The Payments module emits events
async function processPayment(orderId: string) {
  // Perform payment logic...
  const success = true;

  if (success) {
    // Notify other modules without knowing who they are
    internalBus.emit("ORDER_PAID", { orderId, timestamp: Date.now() });
  }
}
```

Using an internal bus like the one shown above decouples the sender from the receivers. The payment module does not need to know that a shipping module even exists. This mimics the behavior of a message broker like RabbitMQ or Kafka but stays entirely within the application's memory space.
- In-memory calls are significantly faster than network-based RPC or REST calls.
- Synchronous calls provide strong consistency but increase temporal coupling.
- Asynchronous events allow for eventual consistency and better system resilience.
- Internal events make it easier to split modules into microservices later by replacing the local bus with a message broker.
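One hedged way to prepare for that future swap is to hide the bus behind a small interface of your own, so modules never depend on EventEmitter directly. The `EventBus` interface and `InProcessBus` class below are illustrative:

```typescript
import { EventEmitter } from "events";

// A minimal bus abstraction (illustrative). Modules depend on this
// interface, so the in-process implementation can later be replaced
// by a RabbitMQ or Kafka adapter without touching any callers.
interface EventBus {
  publish(topic: string, payload: unknown): void;
  subscribe(topic: string, handler: (payload: unknown) => void): void;
}

class InProcessBus implements EventBus {
  private emitter = new EventEmitter();

  publish(topic: string, payload: unknown): void {
    this.emitter.emit(topic, payload);
  }

  subscribe(topic: string, handler: (payload: unknown) => void): void {
    this.emitter.on(topic, handler);
  }
}
```

A broker-backed class satisfying the same `EventBus` interface is then the only code that changes during extraction.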
Transactional Consistency
One major advantage of the modular monolith is the ability to use local database transactions across modules. In a microservices environment, ensuring consistency across two services requires complex patterns like Sagas or Two-Phase Commit. In a monolith, you can simply wrap the calls in a single database transaction.
However, you should use this power sparingly. Over-reliance on cross-module transactions creates tight coupling at the database level. Try to design your business processes so that each module can commit its own changes independently whenever possible.
If you do use cross-module transactions, ensure they are managed by an orchestrator or a high-level service. The individual modules should focus on their own state, and the orchestrator should ensure that both the order is created and the payment is recorded successfully. This keeps the transaction logic out of the core domain modules.
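A sketch of that orchestrator shape, with hypothetical names throughout: the `Transaction` type and the `runInTransaction` helper stand in for whatever database client the application actually uses, and the two module interfaces are illustrative.

```typescript
// The orchestrator, not the domain modules, owns the transaction boundary.
interface Transaction { readonly _tag?: "tx" } // stand-in for a real db handle

interface OrdersTxApi {
  createOrder(tx: Transaction, userId: string): Promise<string>;
}
interface PaymentsTxApi {
  recordPayment(tx: Transaction, orderId: string, amount: number): Promise<void>;
}

async function placeOrderWithPayment(
  runInTransaction: <T>(work: (tx: Transaction) => Promise<T>) => Promise<T>,
  orders: OrdersTxApi,
  payments: PaymentsTxApi,
  userId: string,
  amount: number
): Promise<string> {
  return runInTransaction(async (tx) => {
    const orderId = await orders.createOrder(tx, userId);
    await payments.recordPayment(tx, orderId, amount);
    return orderId; // both writes commit or roll back together
  });
}
```

Each module still only touches its own tables; the orchestrator merely decides which pieces of work must commit together.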
Strategic Evolution and Scaling
A modular monolith is not just a destination; it is a strategic starting point that keeps your options open. As your application grows, you may find that one specific module has very different resource requirements than the rest. Perhaps your image processing module requires heavy CPU usage while the rest of the app is I/O bound.
When this happens, you can extract that single module into a separate microservice. Because you have already established clear boundaries and interfaces, the extraction process is relatively straightforward. You move the code to a new repository, change the internal calls to network calls, and deploy it independently.
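Because callers depend on an interface rather than a concrete class, the swap can be as small as providing a network-backed implementation of the same shape. This is a sketch only: the `OrdersApi` interface mirrors the public API idea from earlier, and the base URL and endpoint path are illustrative.

```typescript
// After extraction, callers keep the same interface; only the
// implementation changes from in-process calls to HTTP.
interface Status { state: string }

interface OrdersApi {
  getOrderStatus(orderId: string): Promise<Status>;
}

function createRemoteOrders(baseUrl: string): OrdersApi {
  return {
    async getOrderStatus(orderId: string): Promise<Status> {
      // Hypothetical endpoint; now subject to the full network tax.
      const res = await fetch(`${baseUrl}/orders/${orderId}/status`);
      if (!res.ok) throw new Error(`orders service returned ${res.status}`);
      return (await res.json()) as Status;
    },
  };
}
```

The retry, timeout, and tracing concerns discussed earlier now apply to this adapter, which is exactly why delaying the move until it is justified pays off.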
The modular monolith allows you to delay this move until you have the data to justify it. You can observe the performance characteristics of each module in production using standard profiling tools. This data-driven approach prevents you from over-engineering parts of the system that don't actually need the complexity of microservices.
Testing also becomes much simpler in this architecture. You can write unit tests for each module in isolation and integration tests that run the entire system without needing to spin up dozens of containers. This leads to a faster feedback loop for developers and more reliable releases.
Finally, the modular monolith supports a smaller operations team. Since you are managing fewer deployment units, you have less overhead in terms of CI/CD pipelines, security patching, and infrastructure monitoring. This allows your engineers to spend more time building features and less time managing the platform.
When to Actually Extract
You should only consider moving a module to its own service when it provides a clear benefit that outweighs the network tax. Common reasons include the need for a different technology stack, independent scaling requirements, or separate team ownership. If a specific feature needs a different database or a different programming language, extraction is the logical step.
Another valid reason is team size. If your engineering organization grows to hundreds of people, the friction of everyone working in the same codebase can become an issue. At that scale, the physical separation of microservices helps define team boundaries and reduces deployment contention.
Before extracting, ensure your internal module is truly decoupled. Check for shared database tables and circular dependencies. If you cannot run the module's tests without loading the entire rest of the application, it is not yet ready to be its own microservice.
