How to Migrate From Monolithic Applications to a Microservice Architecture: Part 2
Have you read part 1? Click here to see it first!
How big is too big?
Having decided to reengineer the application on a microservice architecture, the team began the process of designing a scalable, maintainable base on which to rebuild edocr.
One of the first architectural issues we faced was a very common one in software design—how should the components be delimited and broken up? As with anything, our engineers had varying opinions on this topic and our outlook on the options changed as we iterated on the design.
From an object-oriented mindset, our initial approach was to separate each "class" into its own service. This resulted in a very comfortable context switch from the previous object-oriented codebase and the lines were already drawn, so to speak. Our initial design was to house all logic in an API service that could be exposed to our web application, various in-house mobile apps, and potentially the public at some future date. This layer would depend on services that acted as models for the various types of objects in the application hierarchy.
Based on feedback from the development and architecture teams during the initial design process, we began looking at making the services even smaller. There are numerous benefits to breaking down to units of atomic functionality as separate services:

- One of the core goals of a microservice architecture is the ability to avoid accumulating bloated, unmaintainable "legacy" code over time. By depending on individual small services that each provide a single piece of functionality to the greater application, any one piece can easily be refactored or even replaced entirely with little friction to the overall product.

- Code paths involving failing services can be avoided. With a large number of interoperating services in the ecosystem, there will invariably be times when something doesn't go as planned. By having each small piece of functionality in its own service and writing good error handling when talking between services, any unavailable services can be worked around or skipped if the application logic allows for it.

- Acceptance testing of individual services can be broken down to small, manageable test sets. By coding each service to a very small public interface (often one or two endpoints in our current case), the scope of automated testing of the functionality becomes much narrower.
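The "work around or skip" behavior described above can be sketched roughly as follows. This is a minimal illustration, not our actual code; `fetchRecommendations` and the empty-array default are hypothetical stand-ins for a real service call and its fallback value.

```javascript
// Hypothetical call to a non-critical service. If the call fails, fall
// back to a safe default instead of failing the whole request.
async function getRecommendations(fetchRecommendations, userId) {
  try {
    return await fetchRecommendations(userId);
  } catch (err) {
    // The recommendations service is down or unreachable; the page can
    // still render without this data, so skip it rather than erroring.
    return [];
  }
}
```

Injecting the service call as a parameter also makes this fallback logic trivial to unit test with a stubbed failure.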
One of the core concerns with separating into such small services was the need for each service to talk to numerous other services to perform its processing. To facilitate this, we created a service routing component that takes in a JSON configuration file and provides native access to the registered services without requiring each developer to know anything about the implementation of the underlying service, including its port number or even hostname if the service is not local. While our code is currently all in node.js, porting this library to another language is straightforward and allows us the flexibility to talk between systems, services, and languages without the boilerplate code of manual HTTP calls.
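The core of such a routing component can be sketched as below. The registry shape, service names, and `resolve` helper are illustrative assumptions, not our actual implementation; in practice the registry would be loaded from the JSON configuration file.

```javascript
// Hypothetical registry, as might be parsed from a JSON config file.
// Each entry maps a logical service name to its network location.
const registry = {
  documents: { host: "docs.internal", port: 8081 },
  accounts:  { host: "localhost",     port: 8082 }
};

// Resolve a registered service name to a base URL, so callers never
// hard-code hostnames or ports.
function resolve(serviceName) {
  const entry = registry[serviceName];
  if (!entry) {
    throw new Error(`Unknown service: ${serviceName}`);
  }
  return `http://${entry.host}:${entry.port}`;
}
```

A real version would layer an HTTP client on top of `resolve` so developers call registered services by name and never construct URLs by hand.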
We kept the singular public gateway, which acts as a router and authentication handler for the underlying services. That API gateway and various utility microservices interact with the functionality-providing services via the routing component, leading to a decentralized application structure that aids in long-term maintainability.
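A gateway of this shape can be sketched as follows. The route table, `isAuthenticated` check, and `dispatch` function are hypothetical placeholders; in our architecture, `dispatch` would hand off to the routing component described above.

```javascript
// Hypothetical route table mapping public path prefixes to the
// internal services that handle them.
const routes = {
  "/documents": "documents-service",
  "/accounts": "accounts-service"
};

// Gateway handler: authenticate the request, then dispatch it to the
// matching underlying service by path prefix.
function handle(request, isAuthenticated, dispatch) {
  if (!isAuthenticated(request)) {
    return { status: 401, body: "Unauthorized" };
  }
  const prefix = Object.keys(routes).find(p => request.path.startsWith(p));
  if (!prefix) {
    return { status: 404, body: "Not Found" };
  }
  // Forward to the internal service; clients never talk to it directly.
  return dispatch(routes[prefix], request);
}
```

Keeping authentication in this single layer means the internal services can stay focused on their own functionality.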
In retrospect, it would have been beneficial to have spent more time vetting the decisions made in this stage prior to beginning implementation. However, in true agile fashion, the team was able to adjust course during development and get the code base to a place that is much easier to compartmentalize and maintain going forward.