Hyperautomation and the Technological Earthquakes That Created It

A turn of phrase I like quite a bit is describing something as a “quake” blog, book, movie, or technology: a ground-shifting idea or concept that changes my worldview. The three biggest “quakes” of my 25-year career were Linux, Puppet, and GitHub Actions, and their stories are all interwoven with me personally and with the technology community at large.

On Wednesday, May 17th at 3:30 PM CT, I will be teaching a tutorial on GitNops for the attendees of ONUG Spring 2023. An overview of the tutorial can be found here; it will cover my personal approach to Hyperautomation for network operations. In addition, I host a weekly webinar on Mondays at 12 PM ET, followed by office hours from 1-3 PM ET. The webinar runs on StreamYard, and previous webinars can be found on YouTube. The Zoom link to the office hours changes each week, but the May 1st session with Sounil Yu, creator of the Cyber Defense Matrix, is here. If you would like an invite to the office hours, please email me at mcgonagle@gmail.com or fill out the form here.

Hyperautomation is a new phrase and a focus of one of ONUG’s working groups. Gartner’s definition of Hyperautomation is: 

  • Hyperautomation is a business-driven, disciplined approach that organizations use to rapidly identify, vet and automate as many business and IT processes as possible. Hyperautomation involves the orchestrated use of multiple technologies, tools or platforms, including: artificial intelligence (AI), machine learning, event-driven software architecture, robotic process automation (RPA), business process management (BPM) and intelligent business process management suites (iBPMS), integration platform as a service (iPaaS), low-code/no-code tools, packaged software, and other types of decision, process and task automation tools.

I have personally been very hyper about automation for my entire career. So much so that my 80-year-old mother knows more about computer automation than she probably should. It has been a constant subject of conversation for me since at least 2007. Now that I am in my mid-forties, I have chilled out socially, but my desire to automate the world, especially the networking world, has never been stronger. My personal hyperautomation story is below.

My GitHub handle is mcgonagle, and my short self-description is “Livelihood based on open source since Linux kernel 2.2.5-15”, which means that I discovered and have been involved with Linux since early 1999. I discovered Linux at a startup called e-Dialog, where I interned. My university prioritized internships, and balancing football, work, and school was hard, to say the least. e-Dialog was an email marketing company that depended on fileservers for its mail-merge operations. Samba servers provided the file storage for the hundreds of thousands of images that were being merged into the emails.

The Samba project was revolutionary. Andrew Tridgell, while a PhD student, reverse engineered the Server Message Block (SMB) protocol and founded the open source Samba project. Tridgell has likened this reverse engineering to learning French by sitting in a Parisian cafe and listening to others’ conversations. Samba was free and open source, and one of the senior engineers on the e-Dialog UNIX team was a core Samba developer. Samba and Linux allowed e-Dialog to get started, save precious startup dollars (I heard that in 1999 money we were saving $500,000 in licensing), and completely blow the mind of a 19-year-old football-playing intern.

I instantly decided I wanted to focus on UNIX/Linux, and when I graduated in 2001 I was recruited to work in the Air Traffic Control Management data center. Fast forward to 2005: due to some network poking by Tridgell, the Linux project lost access to BitKeeper. This series of events prompted Linus Torvalds, the creator of Linux, to start the Git project. Git’s differentiation from previous source code management systems (SCMs) is that it is “distributed”: its innovation was that everyone has a byte-for-byte copy of the repository. This solves the problem of needing to work on the Linux kernel on a plane. A kernel developer can work offline and, upon landing, sync or “push” their changes back up to a shared Git repository via a differential sync. Torvalds has said he designed Git to be like a file system, owing to his background in operating systems. Similarly, Tridgell’s doctoral work produced the rsync tool, the first able to do differential binary syncing. I currently work at GitHub and have benefited personally and professionally from these developments and projects since 1999.

The greatest gift that Linux gave me was an appreciation for the UNIX design philosophy, which prioritizes “composability”: programs snapping together like Lego blocks. A UNIX CLI example is as follows:

find . -type f -name "*.txt" | xargs grep "search_term" | sed 's/search_term/replacement/g' | sort | uniq -c | sort -rn > results.txt

In UNIX, the pipe character “|” feeds the output of one command into the following command. This lets you daisy-chain commands together into a much larger, much more powerful tool. This “contract” between each program surfaces again in GitHub Actions, which I cover after the next section.
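The same contract can be seen in a tiny, self-contained form (the file names and contents here are made up purely for illustration):

```shell
# Create two throwaway sample files.
printf 'alpha beta\nalpha\n' > a.txt
printf 'beta beta\n' > b.txt

# Each stage consumes the previous stage's stdout: the pipe is the contract.
# Split words onto lines, sort them, count duplicates, rank by frequency.
cat a.txt b.txt | tr ' ' '\n' | sort | uniq -c | sort -rn > counts.txt
cat counts.txt
```

Each program knows nothing about its neighbors beyond the stream of text flowing between them, and that ignorance is precisely what makes the pieces composable.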

My second technology earthquake was Puppet, in early 2007. I recognized it instantly as an automation framework that would change the world. We had been using something similar, but harder to use and not portable, in the FAA’s Air Traffic Management system, and it was that experience that allowed me to recognize Puppet as the next big thing. Puppet’s main innovation was a domain-specific language (DSL) for “modeling” (to model is the correct verb in automation speak) systems and environments. What I was most excited about was Puppet’s applicability to network devices. I had grand visions of using Puppet to command and control large-scale network environments as if they were puppets and I was the puppet master pulling each of their strings. Need to use a new NTP server? Make one config change and then allow the network to sync. (Puppet in those days was a pull-based system.) My main focus at the time was getting Puppet to work on F5 BIG-IP devices, and my interactions with F5’s systems engineers as a customer led me on to roles in Sales and Product Management at F5 and NGINX. In 2017, while working at F5, I was focused on Ansible, socializing it as a platform to do exactly what I had been trying to do in 2007. Between 2007 and 2017, F5 had created an API for managing its devices, which gave me and Ansible the opportunity to command and control them. That is where the Ansible name comes from.
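As a sketch of what that modeling looked like, a minimal Puppet class for the NTP example might read as follows (the class layout, template path, and server names are illustrative, not a production module):

```puppet
# Model the desired NTP state once; every agent that pulls this
# catalog converges on it.
class ntp (
  Array[String] $servers = ['ntp1.example.com', 'ntp2.example.com'],
) {
  package { 'ntp':
    ensure => installed,
  }

  file { '/etc/ntp.conf':
    ensure  => file,
    content => template('ntp/ntp.conf.erb'),  # renders $servers into the config
    require => Package['ntp'],
    notify  => Service['ntpd'],               # restart ntpd when the file changes
  }

  service { 'ntpd':
    ensure => running,
    enable => true,
  }
}
```

Changing `$servers` in one place and letting the agents pull the new catalog is exactly the “make one config change and then allow the network to sync” workflow described above.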

Andrew “Ender” Wiggin is Orson Scott Card’s fictional character who uses the faster-than-light communication technology called the ansible to command and control all the starships in the galaxy. He uses this ansible technology to defeat the buggers in the climactic conclusion of the first book, swarming all the starships into an offensive and defensive pattern that creates an attack opening to destroy his opponents. Similar to the Puppet metaphor, the Ansible metaphor is all about “command and control”. I have been keenly aware of the need for command and control since the beginning of my career working as a contractor for the FAA. A 1988 NATO definition is that “command and control is the exercise of authority and direction by a properly designated individual over assigned resources in the accomplishment of a common goal”.

Automation that builds on command and control requires systems thinking. It is the most critical aspect of an automation initiative or approach. Unfortunately, systems thinking can be a bit slow and “academic”. My reaction has been to speed it up, an approach I have been developing as “Fast Systems Thinking”. Part of this work has been to seek out the fastest systems thinkers in the world, which led me to the Air Force Combat Controllers. They are among the most highly trained and elite special forces teams in the military; they attach to other special forces squads and provide tactical air support, including battlefield air traffic control. Their motto is “First There”: they are literally the first and last soldiers to enter and exit a battlefield. One example of their fast systems thinking is stacking up 12 or more airplanes, all traveling at different speeds and altitudes, so that each plane in the column drops its bombs on a target the size of a pickle barrel, hammering it with the payloads of all 12+ aircraft. In addition to the significant advantage they provide on the battlefield, they are well known for their humanitarian efforts during and after a crisis. They are trained paratroopers and certified FAA air traffic controllers, and they will parachute into a humanitarian crisis to reestablish air traffic control when it is needed. I literally can’t think of any group that better personifies my vision for automation and systems thinking, or better personifies Ender from Ender’s Game.

My third quake technology was GitHub Actions. When I first started at GitHub, I heard it described as a “Dream Engine”, which I loved: essentially, if you can dream it, you can build it. What makes it so dreamy? The event system it is built on, and its focus on containers as the atomic unit of work. Since F5, I have been focused on CI/CD, having worked on both Jenkins and Spinnaker at CloudBees and Armory.io. Much like my experience with Puppet, I recognized the power of GitHub Actions upon first encountering it. High-quality eventing had been missing from my designs since I started my automation journey in 2007. Events allow you to take action based on activity and state downstream, and even upstream, in your workflow. Gartner’s definition of hyperautomation mentions eventing, and I would argue that it is the most important component of hyperautomation design. With a close friend, I have been developing and socializing a new type of domain-driven design focused on eventing.

GitHub Actions currently provides 37 events to command and control your workflows, ranging from a git commit or pull request, to a scheduled cron-like event that fires every N hours, to the ability to watch for a certain label or issue comment and then take action based on it. The most powerful is the gh CLI and its ability to communicate with GitHub Actions and fire off a workflow on demand. This provides a great deal of command-and-control capability and allows an automator to chain together different systems, as long as they support installing and running the gh CLI. To be clear, the gh CLI is interacting with GitHub’s API, and the same can be orchestrated with API tooling such as Postman or the language of your choice.
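A minimal workflow sketch wiring three of those event types to the same job might look like this (the workflow name, schedule, and job are illustrative, not part of the ONUG material):

```yaml
# .github/workflows/gitnops-demo.yml
name: gitnops-demo
on:
  push:                      # fires on every push to main
    branches: [main]
  schedule:
    - cron: '0 */4 * * *'    # cron-like event, every 4 hours
  workflow_dispatch:         # allows on-demand runs, e.g. via the gh CLI
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "Triggered by ${{ github.event_name }}"
```

With this in place, `gh workflow run gitnops-demo` from any authenticated machine fires the `workflow_dispatch` event, which is the chaining mechanism described above.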

I am using this event system and these containerized workflows to run Terraform/Postman/Cloudflare Wrangler API calls against Cloudflare as a demonstration and workshop at ONUG Spring 2023. My work is called GitNops, and it is a workflow similar to the Kubernetes-focused GitOps. In this workshop, participants will learn DevOps best practices and principles, such as:

  • Configuration Management with Git and GitHub
  • Continuous Integration of the Terraform HCL and Postman JSON
  • Automated Testing of Terraform code with the Super-Linter and security scanner Snyk
  • Infrastructure as Code modeled through Terraform HCL and the Postman UI and CLI
  • Continuous Delivery to an on-premises data center’s management LAN with the Kubernetes-based GitHub Actions Runner Controller 
  • Continuous Deployment of network engineering techniques and how to implement them
  • Continuous Monitoring concepts for a network engineering workflow management system such as GitNops and GitZT
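As a sketch of how the continuous-integration bullets above could be wired into a single Actions workflow (the repository layout and directory name are assumptions, not the actual workshop code):

```yaml
name: terraform-ci
on:
  pull_request:
    paths: ['terraform/**']    # only run when the HCL changes
jobs:
  lint-and-validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: github/super-linter@v4          # automated linting of the HCL
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - uses: hashicorp/setup-terraform@v2
      - run: terraform -chdir=terraform init -backend=false
      - run: terraform -chdir=terraform validate
```

Delivery to the management LAN would then be a separate job targeting self-hosted runners managed by the Actions Runner Controller.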

The key thing to keep in mind and reflect on is the eventing that drives the whole workflow in GitNops and GitHub Actions. It provides a UNIX-like design philosophy in which composable programs communicate and contract with each other on guarantees of action or activity. I would argue that this is every bit as powerful as UNIX or Puppet itself. GitNops and Fast Systems Thinking are the culmination of my life’s work, and I have worked, and will continue to work, hard at making the tutorial at ONUG the very best and most interesting it can be.

I look forward to seeing you in Dallas!

Sincerely and appreciatively yours,

Thomas A. McGonagle

Author's Bio

Thomas McGonagle