Data Hoarders Must Embrace Analytics

The conservation of quantum information holds that information can neither be created nor destroyed. Stephen Hawking famously grappled with this principle, ultimately conceding that a black hole does not consume information like a giant cosmic eraser.

Outside the realm of quantum mechanics, we have the physical world of corporate offices. In this world, information is collected, curated, and consumed at an accelerated pace with each passing year. There is, however, one similarity between these two distinct realms: Data is never destroyed.

We are a nation and a world of data hoarders.

System administrators are keen to collect as much diagnostic information as possible to help troubleshoot servers and applications when they fail. The Internet of Things has billions of devices broadcasting data that is easily consumed through APIs. Developers obsess over metrics, logs, and events, hoping each byte collected brings them one inch closer to the nirvana promised by AIOps and observability.

Data hoarding results in an accelerated amount of ROT (Redundant, Outdated, and Trivial) information. And that data prevents us from effectively maintaining and monitoring our systems, especially our networks. 

Stop the Madness

So, we are at a point where we believe we need ever more data, even as that data reduces our ability to maintain and monitor our systems and networks. These networks grow more complex with time, forcing administrators to collect still more data in the hope it proves helpful. Soon, the volume of data outstrips our ability to work with it, use it for analysis, or gain a basic understanding of what is happening.

Data hoarding results from the “unknown unknowns,” events you cannot plan for because you’ve never seen them before. System administrators hoard data because they have no idea what data they need. Corporations today are drowning in infrastructure monitoring data. Their legacy monitoring and observability tools cannot keep up with the increased volumes of data and often cannot provide accurate root cause analysis. 

We have two choices. Either we reduce the volume of data collected or find a new way to use it. 


We are not going to break our data addiction anytime soon. Thus, we need to find new methods to apply to the volumes of infrastructure monitoring data we collect. Traditional legacy monitoring tools focus on collecting data, storing it in a data warehouse, and presenting it in dashboards, leaving the end user to ascertain whether an issue exists.

These legacy tools should support administrators with three key goals: collect data, correlate metrics, and help teams collaborate on resolutions and outcomes. However, growing data volumes prevent these tools from achieving any of them. They don't scale to the volumes required, they miss correlations, and they produce "dashboard fatigue" as teams struggle to find solutions to issues.

Modern network observability solutions address all three goals. These next-generation platforms don't just ingest data; they use it to build machine-learning models that find correlations between events that legacy tools and humans alike cannot see.
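To make "finding correlations" concrete, here is a minimal sketch, not any vendor's actual implementation, of the simplest form the idea can take: measuring how strongly two metric streams move together with Pearson correlation. The metric names and sample values are hypothetical; a real platform would apply far richer models across thousands of streams.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equally sampled metric streams."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical samples: CPU utilization (%) and request latency (ms)
cpu     = [35, 40, 38, 70, 85, 90, 42, 39]
latency = [110, 120, 115, 240, 310, 330, 130, 118]

# A value near 1.0 suggests the two metrics spike together,
# hinting at a shared root cause worth surfacing to operators.
print(round(pearson(cpu, latency), 2))
```

Even this toy version hints at the payoff: instead of two teams staring at two separate dashboards, a correlated pair of metrics points both teams at the same incident.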

Tools built with AIOps in mind will scale along with the infrastructure data. At every step of the process, machine learning algorithms are applied to the data to help filter through the noise, finding the hidden signals. The result is better collaboration, as the siloed infrastructure teams need fewer dashboards to understand the issue and formulate an action plan. 
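As a toy illustration of "filtering through the noise," a simple z-score check flags samples that deviate sharply from the baseline. This is a stand-in for the far more sophisticated models an AIOps platform would use, and the sample data below is invented:

```python
from statistics import mean, pstdev

def anomalies(samples, threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), pstdev(samples)
    if sigma == 0:
        return []  # a flat stream has no outliers to report
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Hypothetical latency stream: steady baseline, then one spike
stream = [100.0] * 20 + [900.0]
print(anomalies(stream))  # only the spike survives the filter
```

The point is the shape of the workflow, not the math: the platform discards the 20 unremarkable samples and hands operators the one that matters, which is why fewer dashboards are needed downstream.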


Nobody goes to school to become a data janitor. But here we are.

We will not be kicking our data-hoarding habits anytime soon. Therefore, we all need to start thinking about how to apply advanced analytics, machine learning, and AIOps to help gain actionable insights from our data. 

Faster resolutions and better outcomes lead to every company’s goal: happy customers.

Author's Bio

Thomas LaRock

Principal Developer Evangelist, Selector
Thomas is a highly experienced data professional with over 25 years of expertise in diverse roles, from individual contributor to management. He is passionate about simplifying complex challenges for others and leading with empathy, challenging assumptions, and embracing a systems-thinking approach. Thomas applies strong analytical reasoning to identify trends and opportunities for significant impact, and builds cohesive teams by breaking down silos, resulting in increased efficiency and collective success. He has a track record of driving revenue growth, spearheading industry-leading events, and fostering valuable relationships with major tech players like Microsoft and VMware.