Microservice Architectures and Containerization

by Selvyn Wright | Apr 28, 2018 | Blog

In this post I want to talk about microservice architectures and containerization. Now, I don't want you thinking I've abandoned my posts on modelling: "I'll be back." I'm currently working on a small project for a client that consists of three tiers: your typical database backend, mid-tier business logic, and a presentation layer (web based, with browser and mobile channels).

Previously, when working with this client, we developed a proof-of-concept model on the AWS cloud platform (EC2 instances and AWS storage).

The environment has to accommodate about 300 simultaneous users (not a great number), but it's a dev environment, so the servers are going to get hammered by partially working code that may or may not exit gracefully. That's not a good thing, as it will create resource leaks on the servers and a number of other issues, though nothing that a reboot of these platforms can't easily resolve (that said, constant reboots are a pain for everyone else).

I would prefer an environment where the 300+ developers can develop their code in a replica of the deployment environment, so that once their apps are deployed there are no surprises.

I was thinking through the costs and the runtime operations, trying to decide how best to build this prototype environment without incurring the cost of running cloud instances, and how to deal with the instability caused by badly written and deployed code from 300+ developers.

I had looked at Docker some years back but never really made much use of it, and wasn't sure how I could.

I didn't want to install several local VMs, which would have meant installing guest operating systems as well as all the software required for each tier. Nor did I want to boot my AWS EC2 instances, as this would incur costs.

So, with cost and simplicity of dev, test and deployment at the forefront of my mind, I set about doing more research into containerization and a closer look at Docker.

The first thing that struck me was that a container is not a VM, even though it is deployed into something that looks like a VM-style environment.

So something like the Docker Engine simply acts as a conduit through to the underlying host OS. It doesn't provide an environment for installing a guest OS (though you can); it provides an environment into which an application or service, with all of its components, can be installed.

A container provides the ability to install a service that is executed within the Docker Engine. Unlike a VM, which provides hardware virtualization, a container provides operating-system-level virtualization by abstracting the user space.

Most of us are familiar with using VM software on our local machines: in this model you are able to run multiple operating systems on a single host, but that host must have a native OS installed.

Virtual machines have progressed greatly in the last 20 years, and two primary models exist: Type 1 (bare-metal) hypervisors, which run directly on the host hardware, and Type 2 (hosted) hypervisors, which run on top of a native OS.

But containers take this to the next level (they can be said to have their roots in Linux control groups, or cgroups).

Look carefully at the container architecture: each container abstracts the user space and is a hermetic unit. There is no cross-contamination between containers.

So, what about a microservice? 

A microservice is a suite of applications that together expose a service such as logging, printing or storage. Each microservice is a hermetic unit comprising all the applications and libraries required to deliver that service. Hang on a minute, isn't this the same as a container? Absolutely. In actual fact, a suite of microservices configured in a particular way is a microservice architecture, which is essentially SOA (Service Oriented Architecture) in an evolved state.

So how does this actually work? We take a large application that exposes multiple services and disaggregate it into smaller components, each of these components delivering one and only one service. Each service is realised in a container (e.g. Docker), which can then be orchestrated with tools such as Kubernetes.
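Packaging one of those disaggregated services might look something like the following Dockerfile sketch (the filename, base image and port are hypothetical):

```dockerfile
# One container per service: this image packages only the logging service.
FROM python:3.11-slim
WORKDIR /app
COPY logging_service.py .
# Nothing else lives in this image, so the service can be built,
# scaled and redeployed independently of the other services.
EXPOSE 8080
CMD ["python", "logging_service.py"]
```

Because each service gets its own image, a bug in one developer's service stays inside that container: you kill it and rerun the image, without rebooting a shared server.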

Even though containers are hermetic, they are fully capable of supporting middleware technologies of all kinds (Web Services, MOMs, RPC etc).

Cost is the hidden beauty

We were able to take a problem that required several virtual hosts (in the past it would have been three physical hosts), develop the applications that run on each of those hosts, and test them for integration, functionality, individual performance, and security (though not for interop performance).

Once we were happy, we redeployed the containers onto virtual hosts (configured to run Docker containers) and everything worked as it did in the dev and test phases. Hardware costs during dev and testing: zero (except for the cost of the hardware hosting the containers, which would have been there whether or not we used containers).

Downside

I wouldn't call what I'm about to say a negative. There were a few teething problems with IP addressing and the like, but nothing that couldn't be solved with the help of the internet; we found that our problems were not new.

Many of the problems we encountered forced us to think about how to ensure that the containers were hermetic and configurable through service points or configuration files.
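One way to keep containers hermetic while still configurable is to push all environment-specific values into environment variables and mounted configuration files, for example via a Docker Compose file. This is an illustrative sketch only; the image names, variables and paths are hypothetical:

```yaml
# docker-compose.yml (illustrative): images stay hermetic, variation is injected at run time.
services:
  logging:
    image: example/logging-service:1.0     # hypothetical image name
    environment:
      LOG_LEVEL: info
      STORE_URL: http://storage:8080       # service point resolved by service name
    ports:
      - "8080:8080"
  storage:
    image: example/storage-service:1.0
    volumes:
      - ./storage.conf:/etc/storage.conf:ro  # config file mounted, not baked in
```

The same images then run unchanged in dev, test and deployment; only the injected configuration differs.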

Conclusion

Containers are an incredibly powerful way of developing, testing and deploying distributed systems. Our research has shown that Google makes extensive use of this technology. It is highly scalable, leads to faster dev lifecycles and gives devs the freedom to experiment in a safe environment (if something goes wrong, kill the container and simply rerun the image).

This author will be making extensive use of containers in all future distributed systems projects.

This technically oriented article is a good companion to an earlier blog post about microservices published on this platform.

About the author:

Selvyn Wright

Selvyn has owned and run a consultancy and IT training firm for the last 22 years, connecting with clients and partners located around the world. He is TOGAF 9 certified and delivers TOGAF 8/9 certification training. He also delivers UML, Java (all forms), C++, CORBA (a speciality), distributed systems and other technical training. He is hired because of his ability to articulate complex conceptual ideas.

About Kipstor

Kipstor provides architecture consulting and managed modelling services and helps organisations develop quality, trusted and structured information that can be easily shared and disseminated to support better decision making and consistent communication.
