This is one of the most important principles in the world of Microservices.
This has a direct impact on the ability of an organization to roll out new changes in less time and with more accuracy.
The idea here is to be able to deploy a new version of any part of the functionality in isolation such that the consumers of that service won’t even know that something has changed.
In the image below, S3 has changed without S1 or S2 noticing; the other services won’t even know that something has changed.
What is the benefit of Independent Deployment?
Well, the benefit is quite evident, but to make it clearer let’s try to prove it by negation.
Let’s say you have 5 microservices in your network (a very humble number; a mid-size organization easily has more than 30 or 40 microservices) and they are all tightly coupled to each other.
So, what will happen if we try to make a change in any one of the services?
Several things can happen here.
- If all the services are deployed on the same box, then they will all experience downtime even when you only have to deploy, say, S3.
- If the changes deployed in S3 break S1 (or any other service), there could be a cascading effect that eventually compromises the entire system, since each application depends on one or the other application.
How can we tackle these problems then?
Let’s look at the solutions one-by-one…
One Service Per OS
When organizations were starting out with microservices, it sure felt like a nice thing to put multiple services on a single OS/Box/Container. This had benefits in terms of maintainability – they didn’t have to maintain each service individually – and it reduced infrastructure cost.
But once virtualization was made easy with tools like Docker and Kubernetes, deploying one application per container became the obvious choice.
Once you have isolated your services at the host level then you get the freedom to do a lot of things.
One immediate benefit is that the technology is not a barrier anymore. You can have a completely different environment for different microservices.
You could choose Java for one service and Python for another. There is no restriction.
Another benefit is in terms of maintenance. Since different teams maintain different services, maintenance is no longer an issue either.
Plus you get the ability to deploy each service independently without touching any other service in the ecosystem.
This is an obvious benefit, and that’s why more and more organizations are adopting cloud platforms – the cloud makes provisioning easy.
Once each service has its own home, scaling becomes easy.
Almost every system has one favourite service that gets the most calls. That service requires more resources than any other service in the network, and if it is deployed in its own container, then the resources of that container can be increased in isolation from the other services.
This not only helps to scale better but also reduces the cost of the infrastructure.
You are only feeding the service that requires it. Every other application will keep on working as usual.
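As a minimal sketch of this setup (the service names, images, and resource numbers are hypothetical), a docker-compose file can give each service its own container with dedicated resource limits, so one service can be redeployed or fed more resources without touching the rest:

```yaml
# Hypothetical compose file: each microservice gets its own container,
# so S3 can be redeployed or scaled without touching S1 or S2.
services:
  s1:
    image: example/s1:1.4.2        # each service ships its own image and version
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
  s2:
    image: example/s2:2.0.1
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
  s3:
    image: example/s3:3.1.0        # the "favourite" service gets more resources
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
```

With a layout like this, rolling out a new version of S3 is just a matter of updating its image tag and restarting that one service (e.g. `docker compose up -d s3`); S1 and S2 keep running untouched.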
Here’s My Personal Experience On This
Recently I got a chance to work on a legacy project. This project was huge. We had to migrate more than 34 components to the cloud. And it was not very straightforward because these components were a part of a huge monolith.
The reason they wanted to migrate to the microservices architecture was because of the ease of development and maintenance.
Also, these codebases grew so big that it was difficult to roll out a new feature on the platform quickly; therefore the time to market for any new feature was bad.
Okay, so let me give you a very high-level overview of the kind of structure that they followed:
Scaling was the obvious problem with this architecture.
There was one service that consumed 80% of the resources on the server.
And if there is a sudden surge in website traffic, the service starts to consume even more, and as a result the other services on the same host suffer: they don’t get sufficient resources, and the overall latency of the system increases.
Now, if these 3 applications were microservices deployed on their own hosts with dedicated resources, we could have easily increased the resources of that one server without changing anything for the rest of the services.
This way everyone would have been happy; the load need not be shared among the services, as they all get dedicated resources.
After converting the monolith into microservices we were able to accomplish that.
The overall structure looked something like this:
This structure is so elegant and solves most of the problems automatically.
If you had asked me to build a similar structure in the 90s, I would never have recommended it. But with cloud computing and virtualization, spinning up a new server is completely automated, and it only makes sense to adapt to this new world as early as we can to reap the maximum benefits.
There are benefits of microservices, but as Uncle Ben says – with great power comes great responsibility.
So even though we have found the power to replace any component individually without affecting the other components in the network, there is a need to be extra careful of the breaking changes.
For example – let’s say you’ve broken down your monolith components into their own microservices, such as a Shipping service and an Inventory service.
But what if you have to make a breaking change in the Inventory service? Since these two services are no longer maintained by the same team and don’t even share the same codebase, there needs to be a certain contract between them.
A contract that will let Shipping trust that the Inventory service will behave in a certain way.
For that to happen there needs to be a written contract: certain expectations that the Shipping service has from the Inventory service. These expectations should not be broken at any time by the Inventory service.
Before rolling out any new changes, the Inventory service should respect the Shipping service’s expectations.
Inventory must run all the test cases (expectations) once to see if everything works as expected, and if something breaks they would know where they made a mistake.
Now, what should inventory service do if it has to make a breaking change?
Well, the Inventory service will have to inform the Shipping service that they are planning to roll out new changes that will break its existing expectations. They would then ask Shipping to be kind enough to set new expectations so that they can continue with their business uninterrupted.
Here, we actually trust the consumer to do a good job writing the test cases, and they trust us to run their test cases before rolling out a new change. It’s a mutual contract.
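A minimal sketch of such a consumer-driven contract (the field names and response shape here are hypothetical): the Shipping team writes its expectations down as plain assertions, and the Inventory team runs them against its own candidate response before every release.

```python
# Hypothetical consumer-driven contract: the Shipping team owns these
# expectations; the Inventory team runs them before rolling out a change.

def shipping_contract(inventory_response: dict) -> None:
    """Expectations Shipping has about Inventory's stock-lookup response."""
    assert "sku" in inventory_response, "response must carry the SKU"
    assert isinstance(inventory_response["quantity"], int), "quantity must be an integer"
    assert inventory_response["quantity"] >= 0, "quantity can never be negative"

# Inventory verifies a stubbed version of its new response against the
# contract; an AssertionError here means the change would break Shipping.
candidate_response = {"sku": "ABC-123", "quantity": 7}
shipping_contract(candidate_response)
print("contract satisfied")
```

In practice this is the idea behind contract-testing tools such as Pact, but even a shared file of plain assertions like the one above captures the mutual agreement: the consumer writes the expectations, the provider runs them.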
Multiple Service Versions
There could be a scenario where you do not have control over your consumers and still have to roll out a breaking change.
This kind of situation arises when you don’t know who your consumers are. A very good example would be the Google or Facebook APIs that we use in our applications to leverage functionality such as login, Firebase, etc.
In these scenarios, contractual changes won’t work, because the API is open to all.
This is where Co-existing service versions come in handy.
Let’s take a real-life scenario where you are consuming 3rd party analytics service.
The analytics service releases new changes to the market with more advanced features, but they contain breaking changes for V1 consumers.
So, instead of changing the V1 service, it would roll out all its changes under a new endpoint (V2). Once it gets new users on V2 and it knows that V2 is stable, it would announce the end date for V1.
This is the time when it would ask all its consumers to move to V2 from V1 at their own pace. This would give ample time to the consumers of the analytics service to adapt to their new APIs.
This is how a third-party service releases a new version – because it can no longer make changes in the existing service without breaking the users’ contract.
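A minimal sketch of co-existing versions (the endpoint paths, field names, and numbers are hypothetical): the breaking change – here, renaming a response field – ships only under the V2 endpoint, while V1 keeps serving the old shape until its announced end date.

```python
# Hypothetical analytics service exposing two co-existing API versions.
# The breaking change (renaming "hits" to "page_views") lives only in V2,
# so V1 consumers keep working and can migrate at their own pace.

def report_v1() -> dict:
    return {"hits": 1024}                       # old field name, kept stable

def report_v2() -> dict:
    return {"page_views": 1024, "unique": 310}  # new shape, breaking for V1

ROUTES = {
    "/v1/report": report_v1,  # deprecated, but served until the end date
    "/v2/report": report_v2,
}

def handle(path: str) -> dict:
    """Dispatch a request path to the handler for that API version."""
    return ROUTES[path]()

print(handle("/v1/report"))
print(handle("/v2/report"))
```

Once V2 is stable and the V1 end date passes, the `/v1/report` entry is simply removed from the routing table – no V2 consumer notices a thing.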
In this article, we covered various aspects of deploying microservices independently. The One Service Per OS model works great in achieving independence in terms of deployment and scalability.
Then we talked about the Consumer-Driven contract between microservices which enables the smooth functioning of the overall system.
And at last, we talked about multiple co-existing service versions for the case where a service wants to roll out breaking changes.
I hope you enjoyed the article and if you have anything to share or talk about, then do comment below. I’m waiting to start a conversation 🙂