‘Terraform’ in the Realm of Cloud Architecture: The Need for a Well-Defined Workflow

Terraform, a powerful tool for building, changing, and managing cloud infrastructure safely and efficiently, has captured the attention of the industry. What started as a way to provision servers has gradually grown into the de-facto standard for expressing the complex infrastructural demands of today’s cloud architectures. Kelsey Hightower, a celebrated software engineer known for his advocacy of open-source software, cloud computing, and Kubernetes, captured this sentiment during an insightful conversation, one that prompted reflection on how crucial it is to formulate a proper workflow.

As IT leaders, we tend to view a ‘platform’ as the stable construct that solidifies once the phase of experimentation has run its course. An analogy between software development and constructing a building helps clarify this concept. One doesn’t start adding furniture until the foundation, framework, and roofing are in place. Similarly, a well-designed platform requires a phase of supported exploration known as a ‘proof of concept’ (PoC). The stable platform begins to materialize only after the requirements have been ascertained and the effort is no longer in the “let’s see if it’s going to work” stage.

However, frustration along this path is common, and it usually stems from skipping crucial steps: diving headfirst into building something stable for the long term without completing the necessary groundwork. For instance, the temptation to codify something brand-new like Apache Kafka – a distributed streaming platform – in Terraform during the PoC round is really an attempt to bypass the much-needed ad-hoc experimentation, and it generally produces suboptimal outcomes. At this early stage, technologists should be free to explore, to make critical errors, and to learn from them in an isolated environment before the system is irrevocably integrated into production.

A more pragmatic approach calls for the creation of a “Version One”: a summation of all the decisions derived from tinkering and tuning during the learning phase. At this stage, decisions about capacity, security, access control, and other key factors should already have been ratified, based on the outcomes of the PoC. The resulting stability opens the door to configuration as code: a consistent way to construct, expand, and modify infrastructure.
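To make this concrete, a “Version One” might look something like the Terraform sketch below, where PoC decisions about capacity, sizing, and access control are captured directly in code. Everything here is hypothetical: the resource names, broker count, instance type, and CIDR range are illustrative choices, not a prescribed layout.

```hcl
# Hypothetical "Version One": decisions ratified during the PoC, captured as code.

resource "aws_security_group" "kafka" {
  name        = "kafka-brokers"
  description = "Access-control decision: brokers reachable only from inside the VPC"

  ingress {
    from_port   = 9092
    to_port     = 9092
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # illustrative internal CIDR
  }
}

resource "aws_instance" "kafka_broker" {
  count         = 3            # capacity decision: three brokers survived the PoC load tests
  ami           = var.kafka_ami_id
  instance_type = "m5.large"   # sizing decision taken from PoC benchmarks

  vpc_security_group_ids = [aws_security_group.kafka.id]

  tags = {
    Name = "kafka-broker-${count.index}"
  }
}
```

The value of this form is that the PoC’s conclusions stop living in someone’s head or a wiki page; they become the reviewable, repeatable source of truth for the platform.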

From a managerial perspective, the ideal scenario after this stage is a stream of small, incremental changes – a practice embodying the iterative workflow. As infrastructural demands evolve, the proposed amendments must be reflected in the configuration, but such changes should land incrementally and in line with actual need. For instance, a modest increase in memory should appear as a correspondingly modest change in the config, so that the code and the running infrastructure stay in step.
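As an illustration of such an incremental change – assuming a hypothetical variable that sizes broker memory – the amendment is a one-line edit that then flows through `terraform plan` and review before `terraform apply`:

```hcl
# Hypothetical variable: bumping broker memory incrementally rather than redesigning.
variable "broker_memory_mb" {
  description = "Memory allocated per Kafka broker, in MB"
  type        = number
  default     = 8192 # was 4096; raised to match observed load, reviewed via `terraform plan`
}
```

The point is less the syntax than the discipline: the change is small, visible in review, and applied only after the team has seen the plan and agreed to it.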

The idea is to preserve the conversation flow, build consensus, and then allow the configuration to change. Sounds like the holy grail, doesn’t it?

As IT leaders, it’s critical that we delineate the path clearly for our tech teams. We need to provide the scaffolding they need for exploration, let the lessons learned guide the construction, and foster a perpetually iterating, consensus-driven approach that ultimately leads to a stable, compliant, and manageable platform. This is exactly the kind of workflow most engineers seek, and exactly what we, as leaders, should strive to cultivate in our organizations.

Author's Bio

Tyson Kunovsky

CEO, AutoCloud