
Microservices vs. monolith: why the right answer is "it depends"

Reading time: ca. 8 minutes

Microservices became the default answer to almost every architecture question for the better part of a decade. If you were building something new, you were building microservices. If your monolith had problems, you broke it up. The pattern had a certain inevitability to it.

That consensus is cracking. Amazon Prime Video famously moved a distributed system back to a monolith and cut costs by 90%. Segment did the same. And in day-to-day practice, teams are quietly discovering what Dimitri Mestdagh, technical architect at Optis, has seen firsthand: microservices solve real problems, but they introduce new ones that are easy to underestimate.

"There are many ways to get microservices wrong," he says. "And a lot of teams did."

Where the microservices hype came from

The original case for microservices was sound. Large teams working in a single codebase create bottlenecks. Some components need to scale independently, others don't. Shared code gets duplicated across projects. Breaking things apart into smaller, independently deployable services addressed all of that.

At the client where Dimitri has worked for years, the initial push toward microservices came from a concrete problem: the same data was being fetched and reimplemented across multiple applications. Microservices were booming at the time, and they offered a clean solution. So the decision was made quickly, and the team moved fast.

Too fast, as it turned out. "We went directly to ten or more microservices. Looking back, I would have started with one or two and grown from there."

That story is not unique. Microservices got adopted by teams that didn't need them, at a scale that created more problems than it solved, driven more by industry momentum than actual requirements.

What actually goes wrong

The technical overhead is the first thing people notice. Each microservice needs its own deployment pipeline, its own compute resources, its own lifecycle. For a small team, that's a lot of operational weight to carry. But the subtler problem is coupling.

In theory, microservices are independent. In practice, they rarely are. "A change in one microservice often leads to changes in another," Dimitri explains. "You think you've got isolated components, but there's an underlying dependency. And then you're not getting the benefits you were supposed to get."

Communication between services adds another layer of complexity. In a monolith, one component calls another via an in-process method call. In a microservices setup, that becomes a network call: it can be slow, it can fail, and if you do it synchronously you've tightly coupled services in a different way. Go asynchronous and you've introduced message queues, event handling, and a whole new category of things that can go wrong.
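That difference in failure modes is easy to underestimate. As a minimal sketch (the `callWithRetry` helper and the simulated flaky service are illustrative, not from any particular library), here is the kind of retry-with-fallback wrapper a synchronous cross-service call suddenly requires, where an in-process method call would need none of it:

```java
import java.util.function.Supplier;

public class RemoteCall {
    // In a monolith this would be a plain method call. Across services it is a
    // network call that can time out or fail, so the caller needs retry and
    // fallback logic (real systems use libraries like Resilience4j for this).
    static <T> T callWithRetry(Supplier<T> remote, int maxAttempts, T fallback) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return remote.get();          // e.g. an HTTP call to another service
            } catch (RuntimeException e) {    // connection reset, timeout, 5xx...
                if (attempt == maxAttempts) return fallback;
            }
        }
        return fallback; // not reached; maxAttempts >= 1
    }

    public static void main(String[] args) {
        // Simulate a service that fails twice before succeeding.
        int[] calls = {0};
        Supplier<String> flaky = () -> {
            if (++calls[0] < 3) throw new RuntimeException("connection reset");
            return "ok";
        };
        System.out.println(callWithRetry(flaky, 5, "fallback"));
    }
}
```

Every such wrapper is code a monolith simply does not need, and the asynchronous alternative trades it for queues and event-handling machinery instead.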

Then there's observability. When something breaks in a monolith, the stack trace tells you where. When something breaks across five microservices, you need distributed tracing, centralized logging, and a clear picture of which service called which, in what order, with what result. Without that, debugging becomes archaeology.
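The core mechanism behind distributed tracing is simple: every request gets an ID at the edge, and that ID travels with each downstream call so centralized logging can stitch the chain back together. A toy sketch (service names and the log format are made up for illustration; production systems use OpenTelemetry and W3C trace-context headers rather than hand-rolled IDs):

```java
import java.util.UUID;

public class Tracing {
    // A shared trace ID is what turns scattered per-service logs back into
    // one coherent request story.
    static String log(String traceId, String service, String message) {
        return String.format("traceId=%s service=%s msg=%s", traceId, service, message);
    }

    public static void main(String[] args) {
        String traceId = UUID.randomUUID().toString(); // created at the edge
        // The same ID is forwarded with every downstream call, e.g. as an HTTP header.
        System.out.println(log(traceId, "gateway", "received request"));
        System.out.println(log(traceId, "orders", "order created"));
        System.out.println(log(traceId, "billing", "invoice failed"));
        // Filtering the central log store by traceId now yields the full chain.
    }
}
```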

The case for the modulith

This is where a newer approach enters the picture. Dimitri has given a talk on it, and the term is gaining traction: the modulith.

A modulith is a single deployable application, like a monolith, but with hard internal boundaries between its modules. Module A and Module B can't just reach into each other's code. They communicate through defined interfaces, and increasingly through internal eventing mechanisms that mirror the way microservices talk to each other via Kafka or a message bus, just without the network.

"You get the clean separation, the defined API contracts between components, without the operational overhead," Dimitri explains. "And if you later want to extract a module into a proper microservice, the groundwork is already done. The interface is already there. It's mostly a matter of pulling it out."

This approach mirrors what many experienced teams now recommend: start with a well-structured monolith, identify the pieces that genuinely benefit from independent deployment, and extract those deliberately. Not the other way around.

When microservices actually make sense

None of this means microservices are the wrong choice. Dimitri’s client company runs a landscape of 20-plus microservices covering everything from ERP integrations to asset management to GIS data. For that kind of broad, cross-domain setup, the architecture makes sense.

"Authentication, personnel data, cost centers: those don't belong to any one application," Dimitri says. "They're truly standalone. Putting them in a microservice that other applications call is exactly the right pattern."

The key distinction is whether the logic is genuinely independent or just technically separated. A microservice that has to change every time a related service changes isn't independent. A service that manages a bounded domain with its own data source, its own lifecycle, and a stable API: that's what microservices are good for.

Scalability is the other real argument. If one component gets hit significantly harder than others, being able to run three instances of that service without scaling the rest of your application is a genuine advantage. But Dimitri is careful here: "Most of our applications ran a single instance. Scalability was never actually a concern. We had the infrastructure for it, so we used it when we needed it, but it wasn't the reason we chose microservices."


Observability is not optional

If you do go with microservices, one thing Dimitri is clear about: observability cannot be an afterthought.

The good news is the tooling has matured considerably. On Azure, Spring Boot integrates directly with Application Insights, automatically forwarding logs, metrics, and traces. You get a live application map and centralized logging across all services out of the box. For teams running their own infrastructure, the LGTM stack (Loki for logs, Grafana for visualization, Tempo for traces, Mimir for metrics) is open source, runs in Docker containers, and is mature enough to use in production. Check out our Tech Radar if you want to know more about the technologies we use.

"Spring Boot already sends a lot of telemetry data by default," Dimitri notes. "If you follow a standard configuration, you get quite far without much extra work."

The same applies to security. In a monolith, you authenticate once and you're in. In a microservices architecture, every service is a potential entry point. Dimitri's team handles this with OAuth 2 and the resource server model: each microservice validates an access token independently, without ever handling user credentials directly. On Kubernetes, pods are isolated by default, so external access has to be explicitly enabled. Neither pattern is particularly complex, but both require deliberate design from the start.
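In Spring Boot, the resource-server model described here is mostly configuration rather than custom code. A sketch of what that looks like with Spring Security's OAuth 2 resource-server support (assumes the `spring-boot-starter-oauth2-resource-server` dependency and an `issuer-uri` property pointing at your identity provider; not a drop-in copy of Dimitri's setup):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ResourceServerConfig {
    // Every request must carry a valid access token. The service validates the
    // JWT against the issuer's public keys; it never sees user credentials.
    @Bean
    SecurityFilterChain api(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .oauth2ResourceServer(oauth -> oauth.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```

Repeating this small block in each service is what makes every entry point independently secured without sharing session state.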

How to make the decision

Dimitri's advice when a new client asks about microservices is to start by asking why.

"I wouldn't just say, here's one solution, apply it everywhere. I'd want to know: what are your pain points? What do you think microservices will solve for you? Because without understanding how your company works, your teams, your applications, you can't make a good decision."

The factors that matter: how tightly coupled is the business logic? Are there components that genuinely need to scale independently? How many teams are working on the system? Is there already a solid CI/CD pipeline in place? (Starting microservices without automated deployment, as Dimitri's team learned early, is a painful mistake.)

The answer is almost never "all microservices" or "pure monolith." Most realistic architectures land somewhere in the middle. Some domains benefit from independent deployment. Others are better off as modules within a well-structured application. A handful of truly cross-cutting services — authentication, shared data sources, central business logic — make sense as standalone microservices that other applications call.

"It's not a binary choice," Dimitri says. "Be pragmatic. Look at your specific requirements and decide based on those, ideally with someone who's worked through these decisions before."

That last part isn't a throwaway line. Getting the architecture right early saves a lot of pain later. Getting it wrong, and having to untangle tightly coupled microservices or extract badly structured modules, is expensive work that could have been avoided.

If you're facing this decision and want to talk it through, that's exactly the kind of conversation Optis is built for. Get in touch and let’s see which architecture fits your organization.


Dimi

17.03.2026


