A Framework for AIOps Data Collection and Access: Multi-Vendor Proof-of-Concept Demonstration

ONUG’s AIOps working group was created to discuss how to manage data and control access for AIOps vendors. The group investigated different approaches to data collection, storage, and access. A panel discussion at the ONUG Fall 2020 event outlined the group’s reference solution and asked for organizations willing to help with a proof of concept.

The panel, chaired by Alex Henthorn-Iwane, presented an overview of the group’s work and encouraged session participants to comment or ask questions. The panel included:

  • Bob Friday, Mist
  • Tim Van Herck, VMware
  • Russ Currie, NetScout  

The session opened with a definition of AIOps:

AIOps is the application of AI technology to streamline operational workflows and automate critical operations in hybrid, multi-cloud environments where scale and flexibility are outstripping the capabilities of human operators to keep pace in real-time.

A possible use case was explained to provide a point of reference for session attendees.

A remote employee complains of poor-quality video when trying to participate in a Zoom meeting. The employee establishes a wireless connection to a router that communicates with the ISP. The ISP connects to the enterprise, possibly through overlays such as a VPN or SD-WAN. The enterprise network connects to Zoom’s web service. The number of nodes between these key points is unknown.

How do network personnel begin to address the complaint when there are so many possible trouble spots? Could an AIOps framework help with data collection and analysis? Would an automated solution resolve the problem?

Data Collection

The first step is to look at the data, but it’s spread across the enterprise, and there’s a lot of it. How do you decide what data to retrieve? The traditional approach would be to pull all the data from each data source.

That might involve device, system, and application logs. Given the complexity of today’s networks, most enterprises would have a number of sources for every problem. If all the data is pulled, where is it stored?

Storage space becomes expensive, especially when the majority of the data is never accessed. In legacy environments, fixed-length files could result in a lot of blank fields.  Even if you can store the data, how do you process such large data sets?

A proposed alternative is to look at the problem to be solved and identify what data is needed, then extract that information using a REST API or other data access tools. Once collected, the data can be stored in a repository of baseline troubleshooting data and abstracted and virtualized for use.

This process builds a baseline of troubleshooting data that ultimately could define what data is needed to address specific problems. Once the data requirements are defined, the process can be automated. Eventually, the data sets could be used to support AI applications.
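As a rough sketch of this targeted approach, the example below pulls only the client health metrics relevant to the video-quality complaint from a hypothetical wireless controller’s REST API and appends them to a local baseline repository. The endpoint, field names, and token are illustrative assumptions, not part of the working group’s reference solution.

```python
import json

import requests  # third-party: pip install requests

# Hypothetical controller endpoint and token -- placeholders only.
BASE_URL = "https://wlan-controller.example.com/api/v1"
TOKEN = "REPLACE_ME"


def fetch_client_health(client_mac):
    """Pull only the fields needed for a video-quality investigation,
    rather than exporting the controller's entire log set."""
    resp = requests.get(
        f"{BASE_URL}/clients/{client_mac}/health",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"fields": "rssi,retries,latency_ms,jitter_ms"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


def store_baseline(record, path="baseline.jsonl"):
    """Append the extracted record to a local repository of
    baseline troubleshooting data."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")


record = fetch_client_health("aa:bb:cc:dd:ee:ff")
store_baseline(record)
```

Pulling named fields for a specific problem, rather than whole log exports, is what keeps the baseline repository small enough to store and process.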

Access Management

Organizations have asked for policies to control access to data. AIOps vendors, for example, need access to data. Enterprises may require data from third-party providers such as WiFi or SD-WAN. How does this exchange of data happen while complying with privacy laws in multiple jurisdictions?

Secondary to how access is allowed is how often access is required. When a vendor says they need real-time access to data, do they mean every five or ten minutes? Do they need access within seconds? The frequency plays a role in how data is processed at the enterprise and how access is controlled.

If data is needed every 15 minutes, an organization could deliver a file in a predetermined format by placing it in the cloud and notifying the third party that the file is present. Alternatively, an IAM (identity and access management) role can be created and shared with the accessing party.
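A minimal sketch of the file-drop option, assuming AWS S3 for cloud storage and SNS for the notification; the bucket name, topic ARN, and file name are placeholders, not prescribed by the working group:

```python
import boto3  # third-party: pip install boto3

# Illustrative names only -- substitute your own bucket and topic.
BUCKET = "enterprise-aiops-exports"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:aiops-export-ready"


def publish_export(local_file, key):
    """Upload a pre-formatted data file and notify the third party
    that it is ready to be collected."""
    boto3.client("s3").upload_file(local_file, BUCKET, key)
    boto3.client("sns").publish(
        TopicArn=TOPIC_ARN,
        Subject="AIOps export available",
        Message=f"New export at s3://{BUCKET}/{key}",
    )


# Run on a 15-minute schedule, e.g. from cron.
publish_export("metrics_1200.csv", "exports/metrics_1200.csv")
```

The push-and-notify pattern suits coarser intervals; access needed within seconds would point toward the shared-role approach instead.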

Data Catalog

The working group created the data catalog concept as a way to manage what will probably be an ever-growing number of vendors wanting access to enterprise data. The data catalog would contain the following (a sample entry is sketched after the list):

  • One entry per vendor
  • List of available data sets
  • Description of each data set
  • Method for accessing the data
  • Data format
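To make the concept concrete, a single catalog entry might look something like the sketch below; the vendor name, data sets, and access details are invented for illustration:

```python
# Hypothetical data catalog entry -- one entry per vendor.
catalog_entry = {
    "vendor": "ExampleAIOpsVendor",  # invented name
    "data_sets": [
        {
            "name": "wifi_client_health",
            "description": "Per-client RSSI, retries, latency, and jitter "
                           "sampled from the wireless controllers",
            "access_method": "REST API (HTTPS GET, token auth)",
            "format": "JSON",
        },
        {
            "name": "wan_link_utilization",
            "description": "Five-minute utilization counters for SD-WAN links",
            "access_method": "File drop to cloud storage with notification",
            "format": "CSV",
        },
    ],
}
```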

Creating a centralized repository for basic information reduces the time an enterprise needs to spend updating every vendor on what is available and in what format. A mechanism would need to be in place for notifying vendors of any changes to the data formats.

Other Considerations

After outlining the reference solution, the panel offered additional items for consideration:

  • Data Ownership and Privacy
  • Baseline vs. On-Demand Access
  • Data and Time Rationalization

Data and time were partially addressed under access management. Any further discussion of the relationship between how fast data can be provided and how quickly vendors require it will need to wait until a valid use case is available. Baseline versus on-demand access likewise needs a valid use case before further discussion.

Data ownership and privacy issues were touched on under data collection. Questions such as who owns the data once a vendor collects it will have to be addressed, and safeguards put in place for proper governance. With the GDPR (General Data Protection Regulation) in force, organizations must ensure they comply throughout the EU (European Union) and EEA (European Economic Area). Tokens were suggested as one way to remove any confidential or private information before access is given.
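As one possible reading of the token suggestion, the sketch below swaps confidential fields for opaque tokens before a record leaves the enterprise, keeping the token-to-value mapping internal; the field names and token scheme are assumptions, not a prescribed design:

```python
import secrets

# The token-to-value mapping stays inside the enterprise;
# only tokens ever leave it.
_token_map = {}

CONFIDENTIAL_FIELDS = {"username", "client_mac", "src_ip"}  # assumed fields


def tokenize(value):
    """Return a stable, opaque token for a confidential value."""
    if value not in _token_map:
        _token_map[value] = "tok_" + secrets.token_hex(8)
    return _token_map[value]


def redact(record):
    """Replace confidential fields with tokens before sharing a record."""
    return {k: tokenize(v) if k in CONFIDENTIAL_FIELDS else v
            for k, v in record.items()}


shared = redact({"username": "jdoe", "client_mac": "aa:bb:cc:dd:ee:ff",
                 "latency_ms": 42})
# -> {'username': 'tok_...', 'client_mac': 'tok_...', 'latency_ms': 42}
```

Because the same value always maps to the same token, a vendor can still correlate events for one client across data sets without ever seeing the underlying identity.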

More Information

Establishing a framework for the exchange of data with AIOps vendors is crucial to cost-effective implementations of AI in the enterprise. Without a framework, companies will find themselves reinventing the wheel for every customer, which increases cost and time to market. If you are interested in participating in this working group, contact us. Otherwise, look out for details on the upcoming ONUG Spring event, taking place May 5-6, to learn how a framework can help your enterprise.

Author's Bio

Guest Author

guest author