Server Architecture Patterns: Monolith, Modular, Service-Oriented
The architecture of a server application is a decision about how to organize and deploy code. It shapes your team’s daily work: how features are built, how deployments happen, how failures propagate, and how the system scales.
There are three patterns you’ll encounter, and a spectrum between them. The decision is not “which is best” - it’s which tradeoffs fit your current situation.
The monolith
A monolith is a single deployable unit. All your code - request handling, business logic, data access - ships together as one application.
This is where most successful software starts, and where many successful systems stay. Rails, Django, Spring - these frameworks are built for monolithic applications. The tooling, conventions, and deployment story are all optimized for one unit.
What you get:
- Simple deployment: one artifact, one server (or cluster)
- No network calls between components - internal function calls are cheap
- Atomic transactions across all data without distributed coordination
- One codebase to search, one log stream to read, one process to debug
- Trivial local development: clone and run
What you pay:
- The entire application deploys on every change. A fix to the order processing logic redeploys the authentication code too.
- One team’s work can break another’s. Shared state and shared dependencies cause coupling that’s invisible until it fails.
- Scaling means scaling everything. If the image processing is the bottleneck, you scale the entire application, not just the image processing.
- A slow memory leak anywhere affects everything.
The ceiling on monolith size is real but higher than it’s fashionable to admit. Large monolithic applications work at substantial scale: Shopify, Stack Overflow, Basecamp, and GitHub operated as monoliths for years while handling massive traffic. The problems emerge gradually - slow builds, test suites that take 20 minutes, PRs that conflict constantly - not suddenly.
The modular monolith
A modular monolith is a monolith with internal structure. The application is still a single deployable unit, but the code is organized into well-bounded modules with explicit interfaces between them. One module does not reach into another module’s internals.
src/
  orders/
    orders.service.ts
    orders.repo.ts
    orders.api.ts      ← public interface
  billing/
    billing.service.ts
    billing.repo.ts
    billing.api.ts     ← public interface
  users/
    ...
orders.service.ts imports from billing.api.ts, never from billing.service.ts or billing.repo.ts. The boundary is enforced by convention (and optionally by tooling like dependency checking or TypeScript path aliases).
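The tooling option can be sketched with ESLint's no-restricted-imports rule, which rejects imports that reach past a module's public interface. This is one way to do it, not the only one; the glob patterns below are illustrative and would need adjusting to the project's actual path layout:

```json
{
  "rules": {
    "no-restricted-imports": ["error", {
      "patterns": [{
        "group": ["**/billing/*", "!**/billing/billing.api"],
        "message": "Import the billing module only through billing.api"
      }]
    }]
  }
}
```

With a rule like this in CI, a stray import of billing.repo.ts fails the build instead of quietly eroding the boundary.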
This is often the right intermediate step. You get the deployment and operational simplicity of a monolith, with code structure that reflects your domain. When the time comes to extract a service, the module boundary is already there - you’re converting an import to a network call, not carving up a ball of mud.
The modular monolith is also the pattern that makes “let’s migrate to microservices” feasible. Trying to migrate directly from a tightly coupled monolith to microservices is one of the most common ways that projects fail. Going monolith → modular monolith → microservices is the path that works.
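The "import becomes a network call" conversion is easiest when the orders module depends on an interface rather than on billing's internals. A minimal sketch, assuming a hypothetical BILLING_URL endpoint and invoice shape:

```typescript
// The orders module depends on a contract, not on billing's internals.
interface BillingApi {
  charge(orderId: string, amountCents: number): Promise<string>; // returns an invoice id
}

// Inside the monolith: a direct, in-process implementation.
const inProcessBilling: BillingApi = {
  async charge(orderId, amountCents) {
    // In real code this would call billing.api.ts directly.
    return `inv-${orderId}-${amountCents}`;
  },
};

// After extraction: same contract, now backed by a network call.
// BILLING_URL and the response shape are hypothetical.
const httpBilling: BillingApi = {
  async charge(orderId, amountCents) {
    const res = await fetch(`${process.env.BILLING_URL}/invoices`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ orderId, amountCents }),
    });
    if (!res.ok) throw new Error(`billing failed: ${res.status}`);
    return (await res.json()).invoiceId;
  },
};

// The call site is unchanged by the extraction: it only sees BillingApi.
async function placeOrder(billing: BillingApi, orderId: string): Promise<string> {
  return billing.charge(orderId, 4999);
}
```

The point of the sketch: because orders only ever saw the interface, swapping inProcessBilling for httpBilling touches one wiring site, not every call site.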
Service-oriented architecture
In a service-oriented architecture (SOA), the application is decomposed into multiple separately deployable services. Each service owns its data and exposes an interface - HTTP/REST, gRPC, or a message queue. Services communicate over the network.
Microservices are SOA taken to an extreme: many small services, each doing one thing, independently deployable.
What you get:
- Independent deployability: the billing team ships without waiting for the orders team
- Independent scaling: scale only what’s under load
- Technology freedom: each service can use the language and database that fits its needs
- Fault isolation: a bug in the recommendations service doesn’t take down checkout
What you pay:
This list is longer, and underappreciated by people who’ve read blog posts but haven’t operated distributed systems:
- Every inter-service call can fail, time out, or return unexpected data. You need retries, circuit breakers, timeouts, and fallbacks - everywhere.
- Distributed transactions are hard. Operations that span multiple services can’t use a database transaction. You need sagas, event sourcing, or careful compensating actions.
- Local development gets harder: running ten services locally requires Docker Compose or similar, and it’s never as fast as running one process.
- Observability is harder: a single request may touch ten services. Without distributed tracing, debugging a latency spike is archaeology.
- Deployments multiply: CI/CD infrastructure, versioning, and service contracts all need managing across every service.
- The operational overhead arrives on day one, before the team is large enough to spread it across autonomous teams.
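The "retries, timeouts, everywhere" cost above can be sketched as a small wrapper; the retry counts, backoff, and timeout values here are illustrative, not prescriptive, and production systems typically reach for a library rather than hand-rolling this:

```typescript
// Retry a fallible async call, racing each attempt against a timeout
// so a hung downstream service can't hang the caller.
async function withRetry<T>(
  fn: () => Promise<T>,
  opts: { retries: number; timeoutMs: number; backoffMs?: number },
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= opts.retries; attempt++) {
    try {
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), opts.timeoutMs),
        ),
      ]);
    } catch (err) {
      lastError = err;
      // Simple linear backoff between attempts.
      await new Promise((r) => setTimeout(r, (opts.backoffMs ?? 0) * (attempt + 1)));
    }
  }
  throw lastError;
}
```

Every inter-service call needs something like this - which is exactly the tax the monolith's in-process function calls never pay.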
The correct version of microservices is: each service is owned by one team, deployed independently, and has a well-defined contract. The incorrect version (common) is: microservices as a way to organize code for a team of five, producing distributed monolith problems without the benefits.
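The compensating-action approach mentioned in the costs above can be sketched as a small saga runner: execute steps in order, and on failure undo the steps that already completed, in reverse. The step shapes here are illustrative; real sagas deal with async steps, persistence, and compensations that can themselves fail:

```typescript
// One saga step: a forward action and its undo.
interface SagaStep {
  name: string;
  run: () => void;
  compensate: () => void;
}

// Run steps in order; on failure, compensate completed steps in reverse.
function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.run();
      done.push(step);
    } catch {
      const compensated: string[] = [];
      for (const s of done.reverse()) {
        s.compensate();
        compensated.push(s.name);
      }
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}
```

Inside a monolith, this whole dance is a single database transaction; across services, it becomes explicit application logic you must write, test, and operate.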
How to choose
Start with a monolith. Unless you have a specific, justified reason not to, start with one deployable unit. This is not a cop-out; it’s the advice of engineers who’ve built systems both ways and paid the distributed systems tax on systems that weren’t ready for it.
Modularize early. Before you have a reason to extract services, establish module boundaries in your monolith. Define what owns what. Keep module interfaces explicit. This has no operational cost and pays dividends as the team grows.
Extract when the monolith becomes a bottleneck. The right signals for extraction: different parts of the system have fundamentally different scaling requirements; different teams need to deploy independently and are constantly blocked waiting for a shared release; a specific component needs different infrastructure (GPU for ML, specialized storage for time-series data). These are real signals. “The monolith is getting large” is not.
Never extract prematurely. The cost of a premature microservice is underestimated until you’ve built one. Merging two services back into one is painful. Starting as a monolith and extracting later is almost always easier than the reverse.
The architecture should match the organizational and operational reality of your team. A three-person startup with a microservices architecture is not sophisticated - it’s encumbered. A 200-person company with 50 independent services deployed by autonomous teams is exactly what microservices were designed for.
The pattern is a tool, not a virtue.