While the primary goal of Microservices is to meet increasing business demands in a time of full-blown Digital Transformation, if you don’t adopt a new approach to monitoring, you may end up chasing your tail.
One of the prominent practices to emerge with the advent of DevOps is the Microservices approach to designing and implementing software.
“In short, the microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.” — Martin Fowler
The era of Digital Transformation means that, with Business in the IT driver’s seat, priorities have shifted: IT operations now focus on supporting business-driven KPIs like speed, agility and continuity, granting organizations the ability to achieve and maintain a competitive edge and longevity. Microservices was developed to enable software developers to keep up with business demands (AKA to move fast) and to improve their daily working lives.
But for the most part, Microservices is emerging as a double-edged sword. While much time can be saved in developing, updating and rolling out solutions, that same time can be squandered trying to monitor their performance during and after implementation. The move from a monolithic architecture to Microservices means that your entire stack is now broken down into many, many components connected through APIs. Logs now come at you from a far greater number and variety of sources, and there is more room for error. Once an error does occur somewhere across your systems, it’s extremely challenging to follow its trail of cause and effect.
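One common way teams make that trail of cause and effect followable is to propagate a correlation ID with every request and stamp it into every log line, so logs from all services in a chain can be joined on one value. The sketch below is purely illustrative and not tied to any specific tool; the header name `X-Correlation-ID` and the `handle_order` function are hypothetical stand-ins for a real request handler.

```python
import logging
import uuid

# Configure logging so each line carries the correlation ID supplied
# via the LoggerAdapter's extra fields.
logging.basicConfig(
    format="%(asctime)s %(name)s [%(correlation_id)s] %(message)s",
    level=logging.INFO,
)

def get_correlation_id(headers):
    # Reuse the ID the upstream caller sent, or mint a fresh one at
    # the edge of the system.
    return headers.get("X-Correlation-ID", str(uuid.uuid4()))

def handle_order(headers, order):
    cid = get_correlation_id(headers)
    log = logging.LoggerAdapter(
        logging.getLogger("orders"), {"correlation_id": cid}
    )
    log.info("received order %s", order["id"])
    # Propagate the same ID to every downstream call so the whole
    # chain of services logs against one shared value.
    return {"X-Correlation-ID": cid}
```

With this in place, finding the root cause of a failed request becomes a search for one ID across all services’ logs rather than a manual reconstruction of the call chain.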
Much is being said of the upsides and downsides of Microservices and its inherent challenges, but what has been clearly established is that for a microservice environment to come to fruition, monitoring requires redefining, revamping and rethinking.
In 2014, Martin Fowler listed “Basic Monitoring” as a Microservices prerequisite, explaining: “With many loosely-coupled services collaborating in production, things are bound to go wrong in ways that are difficult to detect in test environments. As a result, it’s essential that a monitoring regime is in place to detect serious problems quickly. The baseline here is detecting technical issues (counting errors, service availability, etc) but it’s also worth monitoring business issues (such as detecting a drop in orders).”
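Fowler’s baseline of counting errors and tracking availability can be sketched in a few lines. This is a minimal, illustrative example, not a production monitor: the `ServiceMonitor` class, its window size, and the 5% error threshold are all assumptions chosen for the sketch.

```python
from collections import deque

class ServiceMonitor:
    """Track error rate over a sliding window of recent requests."""

    def __init__(self, window=100, error_threshold=0.05):
        self.results = deque(maxlen=window)  # True = success, False = error
        self.error_threshold = error_threshold

    def record(self, success):
        self.results.append(success)

    def error_rate(self):
        if not self.results:
            return 0.0
        return 1 - sum(self.results) / len(self.results)

    def is_healthy(self):
        # "Availability" here is simply staying under the error budget.
        return self.error_rate() <= self.error_threshold

monitor = ServiceMonitor(window=10)
for ok in [True] * 9 + [False]:
    monitor.record(ok)
```

A real regime would export such counters to a metrics backend and alert on them; the point is only that the technical baseline is small, while the business-level signals Fowler mentions (like a drop in orders) require the richer context discussed below.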
Like many other evolutionary steps in IT, Microservices has been driving the emergence of a range of tools to deal with its residual effects (AKA its challenges). Some companies are even successfully tackling error and root cause detection, but reaction times still lag far behind the business’s schedule.
The good news: Artificial Intelligence. Monitoring with tools or platforms founded on true AI capabilities means that your monitoring can be essentially container-agnostic. A system like Loom Ops, which deploys Artificial Intelligence in IT, does not monitor the way traditional tools do; it will not simply send you periodic logs and help you uncover anomalies, it will monitor performance and enable you to trace a performance problem to its root cause.
For organizations embracing microservices, monitoring this way is the only way that makes sense because, with containers, your root cause could be virtually anywhere in your network and, over time, could even be related to a container that no longer exists. When monitoring with an AI platform, the data you are looking at can be presented in the context of your company’s goals and structure, a context that your business people have defined and decided to monitor, and your “logs” will relate directly to service performance rather than IT operations, presented as answers that are comprehensible to the business.
AI platforms can understand environment dynamics and take monitoring another evolutionary step forward: they can tell you which actions are recommended to reach the desired result. What’s more (and there is so much more), AI is also “scale-agnostic”, as there is no limit to the amount of data the machine can process at any given moment.
Deploying a central AI-powered monitoring platform brings back all that was convenient in the monolithic era; it gives you the uber-visibility required to perform optimally in what will only continue to be a highly complex, ever-growing environment.
As a follower and member of the global IT community, and an honored member of its Ops division, I find the evolution of technology fascinating. But what I find even more intriguing is that, at the current pace at which technology is evolving, we are actually witnessing evolution as it occurs. We can (and should) welcome new species on an almost daily basis, and we will watch the fittest survive. History has taught us that those who are most responsive to change and can quickly adapt will always come in first. Guess who I have my bets on?