Principles Of Microservices: Culture Of Automation
This article is a continuation of the previous one, which covered the first principle of microservices: modelling microservices around a business domain.
Once you have modelled your microservices around the business domain and are confident that you have done a good job, the next step is to focus on automating your infrastructure.
Automation is very important in the world of microservices.
Since multiple services each have their own lifecycle and interdependencies, it becomes much more important to automate the entire integration and deployment pipeline.
Principle 2 – Culture of Automation
Deploying a monolithic application is a fairly straightforward process: everything is packaged into one unit and, boom, the app is deployed. But with microservices, it's a whole different story.
As the number of microservices in an enterprise grows, their interdependencies can turn deployment into a complex process and make a developer's life miserable. That is why we need to leverage technologies that let us automate all of this. I'm talking about a culture of automation built on Continuous Integration (CI) and Continuous Deployment (CD).
CI became the need of the hour with the advent of microservices architecture.
With CI, the core idea is to keep everyone in sync by making sure that newly checked-in code integrates well with the existing code.
This is done by running a series of verification steps on the newly checked-in code: does it compile successfully, are the test cases green, and so on. If you are not using CI, working with multiple teams and multiple services is going to be painful and nasty.
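At its core, a CI pipeline is an ordered list of verification steps that halts on the first failure. A minimal sketch in Python (the stage names here are illustrative and not tied to any particular CI tool; in a real setup each step would shell out to the compiler, test runner, and so on):

```python
# Minimal CI pipeline sketch: run verification steps in order,
# stop at the first failure, and record the outcome of each stage.

def run_pipeline(steps):
    """steps: list of (name, callable) pairs; each callable returns True on success."""
    results = []
    for name, step in steps:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # a red step stops the pipeline; later stages never run
    return results

# Illustrative stages standing in for real compile/test/package commands.
steps = [
    ("compile", lambda: True),
    ("unit-tests", lambda: True),
    ("package", lambda: True),
]
print(run_pipeline(steps))
```

The fail-fast behaviour is the important part: a broken compile stage means the test and package stages never run, which is exactly the fast feedback CI is meant to give.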
Benefits Of Continuous Integration
There are many benefits of Continuous Integration:
- Faster feedback – Within minutes, you get feedback on whether the code you have checked in integrates well with the existing code.
- Code Quality – You can attach a continuous inspection tool such as SonarQube that checks the quality of your code and flags loopholes or possible security vulnerabilities.
- Automatic Artefact Generation – The build artefacts are generated and preserved automatically. This lets you run a series of verification and integration tests on the same code. It is really helpful because every artefact has a unique version number attached, so you can be sure the testing is performed on the exact artefact that was generated.
- Recreate the Artefacts On-Demand – Every build is version controlled, so if you need to go back to an old build artefact, it is as easy as navigating to the old version and building again.
- Keep Track of All the Stages – You can keep track of every stage that has run so far. This gives you a clear picture of the process: did the build succeed, were the artefacts generated, were they uploaded successfully, which tests were performed, and so on.
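The "unique version number" idea above can be as simple as stamping every artefact with the build number that produced it, so later test stages can verify they are running against the exact same bits. A sketch (the naming scheme is made up for illustration):

```python
# Sketch: tie every artefact to the CI build number that produced it.

def artifact_name(service, version, build_number):
    """Produce a versioned artefact name, e.g. user-service-1.4.0-b273.jar.

    The <service>-<version>-b<build> convention is illustrative, not a
    standard; the point is that the build number travels with the bits.
    """
    return f"{service}-{version}-b{build_number}.jar"

print(artifact_name("user-service", "1.4.0", 273))
```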
Guidelines to Ensure Smooth Integration
There are certain guidelines that we follow at ThoughtWorks to ensure that integration stays smooth.
- Check in code to the mainline at the end of every day – Push the code you have worked on before you leave.
- Have a suite of tests to validate your changes – Code that is syntactically correct but breaks the behaviour of the application is of no use. So always make it a habit to write a unit test for every change you make; test-driven development is a good place to start.
- Fixing a broken build should be your number one priority – The mantra is: never leave the office with a broken build. The more changes that get pushed on top of a broken build, the harder it becomes to find and fix the bug. So fix the broken build first, then carry on with development.
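The "suite of tests" guideline can start very small. Here is a single behaviour-pinning unit test sketched with Python's built-in unittest module (the `Cart` class is a made-up example of application code):

```python
import unittest

# A made-up piece of application behaviour worth protecting with a test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items)

class CartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add(10)
        cart.add(5)
        # The CI build should go red if this behaviour ever changes.
        self.assertEqual(cart.total(), 15)

# Run the suite programmatically so the script can continue afterwards.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTest)
result = unittest.TextTestRunner().run(suite)
```

Once such tests exist, the CI server runs them on every check-in, which is what makes the "green test cases" verification step meaningful.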
Mapping Continuous Integration To Microservices
This topic is vast enough to be a chapter in itself, but I will keep it short and crisp and only talk about the mapping schemes used in enterprises of different sizes.
Let’s divide the mapping schemes based on the size of the organization as small, medium and large.
A small enterprise is one with only a few teams (say one to five).
The simplest solution for such an enterprise is to use a single source code repository and a single CI build for all the microservices. (I'm describing this technique for small organizations, but I have personally seen it used in some massive code bases as well.)
It looks like this:
It has a very simple build model.
Just one monolithic build for all the microservices.
Any check-in to this source code repository triggers the build for everything and produces artefacts for all the microservices, tied to the same build number.
As you can see in the image, all the artefacts are tied to the same build number.
It is a very simple model: fewer repositories to worry about and a conceptually simple build.
One commit is all that a developer has to worry about at any given time.
As I said, this works for a single team in a small enterprise. But a bigger enterprise means more microservices and more teams, and then this model has significant downsides.
Downsides of this model
- More Build Time – A single line changed in any one service rebuilds all three services.
- More Artefacts – A change in any one service generates artefacts for all the services, consuming more disk space.
- Deployment Logic Becomes Complex – Identifying which service changed and deploying just that service requires complex logic: you would have to read the commits and compare them to see what changed, then deploy only that service. Deploying all three artefacts together is simpler, but that is precisely what we need to avoid.
- Single Point of Failure – If any one microservice is broken, the others cannot build either, which hinders parallel development.
- Difficult for Multiple Teams to Work Together – With every team dependent on the others, managing multiple teams becomes hard.
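The "complex deployment logic" mentioned above usually boils down to mapping the files a commit touched back to services. A sketch, assuming a monorepo layout where each service lives under its own top-level directory (the directory names are invented for illustration):

```python
# Sketch: decide which services to redeploy from the files a commit touched.
# Assumes each service lives under its own top-level directory.

SERVICE_DIRS = {
    "user-service": "user-service/",
    "catalog-service": "catalog-service/",
    "order-service": "order-service/",
}

def changed_services(changed_files):
    """Return the set of services whose source tree a commit touched."""
    hit = set()
    for path in changed_files:
        for service, prefix in SERVICE_DIRS.items():
            if path.startswith(prefix):
                hit.add(service)
    return hit

print(changed_services(["user-service/src/Login.java", "README.md"]))
```

Even this simple version has sharp edges: shared libraries, cross-cutting files like the top-level build script, and commits spanning several directories all need special handling, which is why per-service pipelines are usually preferable to this kind of inference.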
This approach is a slight variation of the one above: a single repository holding all the code, but with separate CI builds.
This is achievable with a well-defined structure. You can link different parts of the source tree with different build pipelines.
This approach is better than the previous one because every build pipeline is separate. So, if the user service is broken, it has no effect on the catalog service build.
Another advantage of this approach is that you still deal with only one repo, so checking code in and out is easy and there is just a single repository to maintain. But this convenience cuts both ways: as much as it encourages developers to check in code frequently, it also lets them push changes touching more than one service in a single commit.
And this happens more often than not.
But, don’t worry there is a third approach as well.
This is the technique I would recommend for building and deploying microservices.
In this approach, each microservice has its own codebase with its own build and test pipeline. If I make a code change, I only have to worry about one microservice and one artefact, and I get clear ownership of it.
If you want to take a step further, you can also keep the associated test cases in the same repository. This way you will know which test is associated with which microservice.
So far we have covered only one part of automation, Continuous Integration. Let's look into Continuous Deployment as well.
As your artefact moves through different stages of the CI & CD, it will be deployed into different environments.
Most of the time, when you are working for a big client or on a distributed architecture, there are three, four or more environments where you will have to deploy and test your code.
Broadly speaking, there will be at least three environments: dev, qa and prod.
Most of the legacy systems I have worked on use an automation tool like Chef or Ansible to prep the environment for the applications.
Those tools hold all the configuration your application needs to run: the JVM/JDK, a Tomcat server, configuration files, user setup and so on. It sounds easy, but maintaining that configuration for all these environments is itself a cumbersome task. It is also very important to have a policy that every change made to those templates goes through the pipeline with all the important checks in place.
But that is about prepping the environment (host server) on which your artefact is going to run.
Now imagine what would happen if you had to build different artefacts for different environments. If that is your build strategy, you have opened yourself up to a lot of confusion.
You would have to keep track of every build. You might also need a separate organization for your artefacts, so that dev artefacts are published to the dev artifactory, stage artefacts to the stage artifactory, and so on. On top of that, you would have to maintain the last working version in all three environments, which demands yet another maintenance strategy.
Environment Agnostic Build Artefacts
This can easily be achieved by decoupling configuration from code.
The only differences between environments are configuration values such as user credentials, database connection strings and service URLs. If you externalize these properties from your application, you can have a single build artefact that can be deployed anywhere.
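Externalizing configuration can be as simple as reading environment variables instead of hard-coding values. A Python sketch (the variable names and defaults are illustrative):

```python
import os

# Sketch: the artefact carries no environment-specific values;
# the environment supplies them at runtime via environment variables.

def load_config():
    return {
        "db_url": os.environ.get("DB_URL", "jdbc:postgresql://localhost/dev"),
        "service_url": os.environ.get("CATALOG_SERVICE_URL", "http://localhost:8081"),
    }

# The same artefact deployed to qa or prod simply sees different
# DB_URL / CATALOG_SERVICE_URL values; no rebuild is required.
print(load_config())
```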
Functionality doesn’t change across environments
Every language has some kind of build artefact associated with it: Ruby has gems, Java has JARs and WARs, Python has eggs. These artefacts are just bundles of files executed at runtime.
These artefacts only need a suitable runtime to execute; they need not know anything about the environment. It should be the environment's job to provide the required information at runtime. This way the artefacts become environment-agnostic.
There are two ways of providing this environment-specific configuration to your application –
- Set the environment variables before deploying the artefact. This can be done with your deployment tool (Jenkins, GoCD, etc.) by adding a stage that sets the environment variables, which can be read from an external vault.
- Another way of providing the config is via a cloud config server. Spring Cloud Config is one such service, responsible for providing configuration values to the application at runtime.
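The two options above can also coexist: check the process environment first and fall back to a config server. Here is a sketch with the config-server lookup injected as a plain function (the lookup shape is hypothetical and not Spring Cloud Config's actual client API):

```python
import os

def resolve(key, fetch_remote):
    """Resolve a config value: environment variables win, then the config server.

    fetch_remote: callable(key) -> value, standing in for a real
    config-server client (injected here so the logic stays testable).
    """
    value = os.environ.get(key)
    if value is not None:
        return value
    return fetch_remote(key)

# Stand-in for a real config-server client, for illustration only.
fake_server = {"PAYMENT_SERVICE_URL": "http://payments.internal:8080"}
print(resolve("PAYMENT_SERVICE_URL", fake_server.get))
```

Letting local environment variables override the config server is a common convention, since it makes ad-hoc overrides easy during debugging without touching the shared config source.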
This way you can have one build that can be deployed anywhere.
These are the main aspects that need to be considered before automating your deployment pipeline.
In this part of the series (Principles of Microservices), we talked about the Culture of Automation, which is essential when it comes to microservices.
An enterprise will easily have 10 or 20 microservices (and this number keeps growing), which can quickly overwhelm developers if proper automation techniques are not in place.
The techniques above help an organization keep a consistent development environment where developers can focus purely on code.