What is the ideal holistic monitoring solution? Well, normally that would be to bring all your technical silos under one umbrella for operations. All individual technical silos would be monitored using point solutions or element managers. The umbrella would be the Manager of Managers or a monitoring framework. This has long been the envisioned solution for holistic monitoring.


If holistic monitoring is still a buzz, then why did the frameworks fail? First and foremost, price was an important factor, but even more important is the fact that the frameworks were not really frameworks, but separate pieces of the same puzzle. This resulted in complex and difficult implementations and integrations. The pace of adaptation to new technologies was slow and not suited for an ever-changing environment. Moreover, the vendors regularly invested in new components and the framework was never finished. This was recognized, and tools with a more limited scope took the lead. Hence, gone was the concept of holistic monitoring.


According to Gartner, the market has split into different monitoring domains. These domains are:
  • IT Infrastructure Monitoring (ITIM)
  • Network Performance Monitoring and Diagnostics (NPMD)
  • Application Performance Monitoring (APM)
  • Digital Experience Monitoring (DEM)
Even Gartner states: “Domain-based monitoring tools provide insight into issues within their realm, but typically are unable to present a holistic view across a digital service.”


With the above in mind, how do we create a holistic view of monitoring, adapted to the needs of the company? Before we can answer that question, we have to look at what needs to be monitored and how complex the IT environment is. In fact, it is simple: you can create holistic monitoring for one or a few applications and their infrastructure, provided they do not change too often. So, it is not about holistic as such, but about the complexity of the IT environment that you want to monitor. The more complex, the more difficult it is to create a single pane of glass. This has been recognized by different parties and has led to the creation of APM tools, which provide an application view of all elements. Again, the more complex the application landscape, the less obvious it is to present the applications correctly in one view. So, with this in mind, what approach would be preferred? There is no simple answer, as multiple options exist:
  • The domain approach: try to provide holistic monitoring per monitoring domain. One can create a holistic view of the network, the infrastructure, or the applications. Again, this approach depends on the complexity of the environment. Take into account that a network view also needs to present your network flows (end-to-end), the infrastructure view needs to present all the components (including your cloud infrastructure), and you need to visualize all applications. This can already be a difficult exercise. What is missing are the cross-domain dependencies between the different components, and you will have overlap between the different domains.
  • The dashboard approach: instead of trying to view all systems of your environment, the systems can be brought together in availability and performance dashboards that present the different domains. Again, this will not give you a holistic view, but it will give you an idea of where things go wrong. This would be a more balanced approach for large environments.
  • The single service desk: all your tools in the different domains write to the same service desk, and alarms are treated as incidents. This will give you an overview from the incident point of view, but you will only shift the problem from one tool to another. Dependencies can be integrated using a CMDB, which itself needs to be maintained. Only a decent integration between the CMDB and the monitoring tools will help here.
  • The application-first approach: you start by modelling your applications using APM tools. They promise the full stack, but in general are limited to the underlying OS and application frameworks. Again, this has to be combined with the underlying infrastructure, but, for an environment with few applications, this might work.
  • Monitor your end-users: instead of starting from the application or the infrastructure, you can also start from the end-user. If they are happy, then all is OK, but user experience alone will not be sufficient to detect underlying issues that might affect all users. You still need to monitor all other elements to have a complete picture.
So, one of the options above might not be sufficient on its own, but a balanced approach, combining different options, might give you a satisfactory holistic view of your IT environment.
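To make the end-user option above concrete, the simplest building block is a synthetic check: probe an application endpoint periodically and record availability and response time. A minimal sketch in Python, using only the standard library; the URL and timeout are hypothetical placeholders, not a reference to any specific tool:

```python
import time
import urllib.request


def synthetic_check(url: str, timeout: float = 5.0) -> dict:
    """Probe a URL once; report whether it responded and how long it took."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Treat 2xx/3xx responses as "available"
            available = 200 <= resp.status < 400
    except Exception:
        # Timeouts, connection refusals, HTTP errors all count as unavailable
        available = False
    return {
        "url": url,
        "available": available,
        "latency_s": round(time.monotonic() - start, 3),
    }
```

A scheduler would call this for each monitored endpoint and feed the results into a dashboard or alerting rule; as the article notes, this only tells you *that* users are affected, not *why*.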


The days of the monoliths are over, and the era of the point (domain) solutions is back. So, what are the tool directions and which tools might be of help? First, there is no question about the maturity of the tools. Most tools do what they should and provide the correct outcome, provided the input and the handling of that input are correct. There are plenty of tools that provide a solution per domain; however, there are few tools that cover multiple domains, and those that do run into the same integration issues as the frameworks. So, I will not provide a list of tools, but rather what is required to provide holistic monitoring of sorts:
  • Monitor all your domains (see the domains by Gartner): either by specialist tools, more integrated tools or open source.
  • Discovery first: choose tools that provide discovery and layer abstraction to connect all the dots. Take into account that discovery might be difficult in some environments, depending on segmentation, firewalls and policies. But, to have a clear picture of your environment, discovery is a must.
  • Visualization: you are interested in a correct visualization of your environment, so the output of your discovery should be easy to visualize. This could be per domain or cross-domain, and grouped according to your needs.
  • Link all the dots: here I would like to introduce tools that can do data analysis, such as artificial intelligence or data mining tools. The goal of your holistic monitoring is not to have a nice picture, but to be of value for your IT operations. The value really comes when the information presented is correlated into real causes or points of cause. From there, the step to the root cause becomes easier. Gartner calls this AIOps. These tools are emerging and might be promising, but do not forget that their value lies in the quality of the data, so, back to square one: monitor all your domains.
Again, go for a balanced approach and choose what is best for your organization.
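To illustrate what "link all the dots" can mean in practice, here is a deliberately simplified correlation step: group alarms whose components are linked in a dependency map and that fired within a short time window. This is a sketch of the idea only, not a real AIOps product; the component names and dependency map are invented for the example:

```python
# Hypothetical cross-domain dependency map: component -> components it depends on
DEPENDS_ON = {
    "web-app": ["app-server"],
    "app-server": ["database", "network-switch-1"],
}


def _related(a: dict, b: dict) -> bool:
    """Two alarms are related if either component depends on the other."""
    return (a["component"] in DEPENDS_ON.get(b["component"], [])
            or b["component"] in DEPENDS_ON.get(a["component"], []))


def correlate(alarms: list, window_s: int = 300) -> list:
    """Group alarms into candidate incidents.

    An alarm joins a group if it is related to any alarm already in that
    group and fired within window_s seconds of it; otherwise it starts
    a new group.
    """
    groups = []
    for alarm in sorted(alarms, key=lambda a: a["ts"]):
        for group in groups:
            if any(_related(alarm, m) and alarm["ts"] - m["ts"] <= window_s
                   for m in group):
                group.append(alarm)
                break
        else:
            groups.append([alarm])
    return groups
```

For example, alarms on the database, the app server and the web app within a few minutes of each other end up in one group (a single candidate incident pointing toward the database), while an unrelated storage alarm stays on its own. The quality of the result depends entirely on the quality of the dependency data, which is exactly the point made above.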


The following would be general guidelines to start the road to holistic monitoring:
  • You need the correct mindset: you want to get there
  • Choose a practical approach and test it
  • Think general and act specific where needed
  • Think big, start small
  • Choose the right tools: there are no bad tools, but some are more complex than others, and everybody wants to sell you theirs
A good approach will lead to a correct implementation, but there are a large number of pitfalls on your road.


There is no such thing as a free lunch, so:
  • Be realistic about the outcome
  • Define your budget beforehand and verify how far you can go
  • Go for the low hanging fruit when deciding what to do
  • If you do not know where you stand, perhaps do a maturity study to define your roadmap


Holistic monitoring is probably for the happy few who have a very structured IT environment, and thus automatically avoid complexity. For the majority, it is more like a puzzle where perhaps not all pieces fall into place or where a few pieces are missing. Our aim was to give you an overview of holistic monitoring and an idea of what is possible for which kinds of organizations. Every organization has to decide for itself where it wants to go with monitoring and how much effort it wants to spend on holistic monitoring.