Application Architectures: It’s been a journey

ONUG Cloud Native Security Working Group Blog Series #1

Over the next several quarters, the ONUG Cloud Native Security Working Group will be publishing a set of short articles that examine different aspects of modern application security – new threats, the role of big data and machine learning in addressing those threats, how security interacts with the CI/CD development process, and more.

The discussion of how to protect today’s applications begins with understanding how applications are built.  And understanding modern application architectures requires us to appreciate the journey that has led to today’s design patterns, recognizing the technical forces that have driven the evolution. This first article is therefore an overview of that journey, along with glimpses into the security implications – many of which will be discussed in greater detail in later articles.

Phase 1: The 3-Tier “Application”/Physical Data Center Era

Our journey begins some 20 years ago – a timeframe in which the digital delivery of business services greatly accelerated.  At that time, “applications” were developed to allow the most critical customer-facing business workflows to be consumed digitally. The applications were hosted in privately owned data centers and constructed using a 3-tier design pattern, with the tiers being the web interface, middleware/business logic, and database.  All of the software for these tiers ran on dedicated physical servers in the private data center, using physical network connectivity. All development and operation of the application was done by the business that owned it.  Network-layer access controls gated external access to the application by controlling which servers were exposed to external clients – typically only the web tier servers.

In summary, this environment was characterized as:

  • Highly Stable: All software and hardware components were developed, operated, and maintained by the enterprise’s employees, located in-house, with controlled and infrequent changes.  Additionally, software was developed using a waterfall methodology, again resulting in controlled and infrequent changes in the production environment.
  • Application Consumers were almost always Humans: The users of the software were predominantly humans, not other software or devices.  Therefore, the application consumer was typically identified and authorized via username/password, and the authentication database was localized to, and established within, the context of the enterprise. More specifically, all AAA (Authentication/Authorization/Accounting) functions were handled entirely within the domain of the enterprise that ran the application.
  • Physically Partitioned: Finally, both the compute and network infrastructures were physical and dedicated – for example, dedicated servers with static IP addresses allocated from a bespoke subnet – and partitioned by role within the application.

Application security in this environment was relatively simple and straightforward.  Because of the physical partitioning of servers and IP addresses, access control was performed using a network-layer (L4) firewall.  External access could be limited to the web tier; more sophisticated security practitioners implemented additional “cross-tier” access control, again using simple, static L4 firewall rules.  Because all of the application consumers were human, authentication and authorization could be performed using a simple username/password directory service, such as LDAP or Active Directory (AD), which was sufficient to meet the AAA/identity needs.  Additionally, because the entire infrastructure was operated locally, no special security solution was needed to track changes by external entities.  Lastly, because of the relatively stable software and hardware environment, changes were infrequent, allowing for a structured process with human oversight for any modifications to the security policy.
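
To make the “simple, static” nature of that access control concrete, the sketch below models the kind of default-deny, cross-tier allow-list an L4 firewall of the era enforced. The zone names, ports, and rules are illustrative assumptions, not the syntax of any particular firewall product.

    # Illustrative sketch of static, L4-style cross-tier access control in a
    # 3-tier data center.  Zone names, ports, and rules are hypothetical.

    ALLOWED_FLOWS = {
        # (source zone, destination tier, destination port)
        ("internet",   "web",        443),   # external clients -> web tier (HTTPS)
        ("web",        "middleware", 8080),  # web tier -> business logic
        ("middleware", "database",   1433),  # business logic -> database
    }

    def is_allowed(src_zone: str, dst_tier: str, dst_port: int) -> bool:
        """Default-deny: permit a flow only if it matches an explicit rule."""
        return (src_zone, dst_tier, dst_port) in ALLOWED_FLOWS

    if __name__ == "__main__":
        print(is_allowed("internet", "web", 443))        # True  - only the web tier is exposed
        print(is_allowed("internet", "database", 1433))  # False - blocked at the perimeter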

Phase 2: The Clustered Application/Virtual Data Center Era

The privately operated data center, attractive in its (relative) simplicity to secure, has now been largely superseded by virtualized, often outsourced, data centers because of business considerations.

On the financial front, the capital expense of owning a physical data center, usually sized for the worst case and therefore typically 3-4x overprovisioned for the average case, was a large concern.  A new model – “pay for what you use” – addressed businesses’ scalability needs by replacing the large up-front CapEx cost with an amortizable OpEx pricing scheme.  An additional financial concern was the high cost of both IT and software development.  On the IT front, the aforementioned “pay for what you use” model was coupled with “…and we (the data center landlord) will manage the infrastructure for you as well.”  The result of these developments was the virtual data center, delivered as a service. And thus was born the public cloud/IaaS (Infrastructure-as-a-Service).  A notable concurrent trend on the software development side was the use of open-source components, which greatly reduced development costs and time.

The other business driver – business agility – manifested as a software development methodology centered on a CI/CD pipeline, fed by agile software development practices.  This enabled new functionality to be brought to market more quickly and delivered continuously.  At the same time, from the operational perspective, auto-scaling compute infrastructure, coupled with orchestratable network overlays – such as Docker containers managed by Kubernetes – was developed and deployed.  These infrastructure platforms enabled OpEx agility, in the sense that incremental OpEx spending could be dynamically adjusted based on instantaneous customer demand, effectively matching business cost with business return in real time.
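
As a rough illustration of that OpEx agility, the sketch below shows the shape of a demand-driven scaling calculation: the replica count, and therefore the incremental spend, tracks the instantaneous request rate. The per-replica capacity and replica bounds are assumed values chosen for illustration, not any orchestrator’s actual algorithm or defaults.

    import math

    # Illustrative sketch of demand-driven scaling: compute how many replicas are
    # needed for the current request rate.  Capacity and bounds are assumed values.

    REQUESTS_PER_REPLICA = 200          # assumed sustainable load per container
    MIN_REPLICAS, MAX_REPLICAS = 2, 50  # assumed floor and ceiling

    def desired_replicas(current_rps: float) -> int:
        """Scale the replica count (and the OpEx it represents) with demand."""
        needed = math.ceil(current_rps / REQUESTS_PER_REPLICA)
        return max(MIN_REPLICAS, min(MAX_REPLICAS, needed))

    if __name__ == "__main__":
        for rps in (50, 1000, 25000):
            print(f"{rps:>6} req/s -> {desired_replicas(rps)} replicas")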

The increased use of orchestration and scaling that came with IaaS gave rise to two additional notable technical trends.  First, as application component orchestration became more dynamic, it also became more automated, resulting in more machine-to-machine workflows.  Second, this same acceleration of software modules becoming “customers” of application functionality resulted in APIs evolving into a dominant – often primary – method of interfacing with application components.
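
A hypothetical example of such a machine-to-machine workflow is sketched below: one service calls another service’s API, authenticating with a workload credential rather than a human username/password. The endpoint and token are placeholders, not a real service.

    import json
    import urllib.request

    # Illustrative machine-to-machine API call: an automated workload, not a person,
    # is the "consumer" of another component's API.  URL and token are placeholders.

    SERVICE_URL = "https://inventory.internal.example/api/v1/items"
    SERVICE_TOKEN = "example-workload-token"  # in practice, issued to the workload itself

    def fetch_inventory() -> dict:
        request = urllib.request.Request(
            SERVICE_URL,
            headers={
                "Authorization": f"Bearer {SERVICE_TOKEN}",  # workload identity, not a user login
                "Accept": "application/json",
            },
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)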

In contrast with the private, physical data center paradigm, the application architecture strategy of this era can be characterized as:

  • Highly Dynamic: Within the scope of the virtual data center and/or public cloud, applications and their infrastructure are far more dynamic. Software development follows a CI/CD model, with code changes being rolled out continuously.  From the operational perspective, instances of Kubernetes-managed containers are created or destroyed in real-time, based on demand, affecting not merely the (virtualized) servers, but also the network architecture, with IP addresses being dynamically assigned and reassigned.
  • Mix of both human and non-human application consumers: The increased prevalence of automated orchestration, coupled with software joining humans as application consumers, challenges the notion of what an application consumer actually is.
  • Virtual, not physical, infrastructure: Both the compute and the network infrastructure are virtualized, resulting in the structure of both being much more ephemeral.
  • Less Direct Control: The application owner’s direct control is diminished, not only at the infrastructure level, but also at the software development level, given the heavy use of third-party open-source software packages.
  • API-Driven: Because of the need to accommodate orchestration systems and non-human users, programmatic APIs, rather than human-centric command line interfaces or graphical interfaces, are now a primary method of communicating with the application.

The combination of these developments has introduced significant new security challenges, arising from the more complex and nuanced environment.  Some of the most notable security implications are:

  • First, a need for a richer notion of the identity of an application consumer.
  • Second, an additional layer of abstraction is needed when referring to an entirely virtual set of infrastructure elements.
  • Third, the need for a means to deal with risks from third-party application components.
  • Fourth, security controls must now address not only the server and network components, but also other cloud service components, including an application’s exposed API surfaces.
  • Finally, automation – augmented by machine learning – is required to quickly detect and mitigate highly mobile and agile threats that would otherwise be missed amid complex and dynamic application behavior.

As mentioned earlier, a fuller discussion of security countermeasures will be addressed in later articles.  However, we should note that many of the current security hot topics and mitigation strategies have been formed in response to precisely these challenges:

  • Zero-Trust to address non-human actor access control;
  • SAST/DAST vulnerability scanning and assessment to manage risk from third-party components;
  • Shift-left practices and API security gateways for bridging the “abstraction gap” as infrastructure becomes more virtual and interfaces become more programmatic;
  • The emergence of AISecOps, highlighting the role of automated detection and response, including the adoption of machine learning (ML) technologies in these new, increasingly dynamic application infrastructure environments.

Phase 3: The Distributed “Application”/Smart Edge Era

The prior “Phase” – clustered applications running in the public cloud – is where the majority of applications sit today on the application evolution journey.  However, forward-thinking application security practitioners should consider the next emergent trend in application architectures: the move from applications built as tightly coupled components running within a single homogeneous infrastructure platform (e.g., Docker containers in a single Kubernetes cluster) toward a new paradigm of more distributed, “composed” applications running across multiple heterogeneous infrastructures.

In this model, the delivery of overall application functionality is now realized from the interactions of a “composition” of independent functional elements, each performing a specialized service, deployed across a diverse set of platforms, form factors, and locations – all linked via persistent APIs.  The choice of location and platform for each element is based on the application’s requirements for the service that element provides. For example, an Internet of Things (IoT) application, such as a smart car, may choose to perform some latency-sensitive functions at an “edge” compute location, while performing big-data analysis at a different, more remote location that consolidates more telemetry. Additionally, some loosely coupled infrastructure functions, such as authentication and large-scale data retention, may be completely abstracted – with no notion of a server – and exposed only as a remote service presenting a black-box RESTful API.
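
A simplified sketch of that composition, using the smart-car example: the latency-sensitive decision is made locally at the edge, while telemetry is handed off to a remote, API-only analytics service. The endpoint, payload fields, and threshold are hypothetical; the point is the division of labor across locations, linked by an API.

    import json
    import urllib.request

    # Simplified sketch of a composed IoT workflow: a time-critical decision is made
    # at the edge, while bulk analysis is delegated to a remote, black-box REST API.
    # The endpoint, fields, and threshold below are hypothetical placeholders.

    ANALYTICS_API = "https://analytics.example.net/v1/telemetry"
    BRAKE_DISTANCE_METERS = 15.0  # assumed safety threshold

    def handle_sensor_reading(distance_m: float, speed_kph: float) -> str:
        # Latency-sensitive logic runs locally: no round trip to a remote cloud.
        decision = "brake" if distance_m < BRAKE_DISTANCE_METERS else "cruise"

        # Telemetry rides an API to a remote service that consolidates and analyzes it.
        payload = json.dumps({"distance_m": distance_m,
                              "speed_kph": speed_kph,
                              "decision": decision}).encode()
        request = urllib.request.Request(ANALYTICS_API, data=payload,
                                         headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)  # fire-and-forget, for illustration only

        return decision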

A full discussion of the factors driving this transformation could be a topic unto itself, but the high-level drivers are (a) the digitalization of more and more business processes (aka “Digital Transformation”), and (b) the transition of many traditionally physical consumer products into the digital realm (aka consumer IoT devices), resulting in a more diverse set of both functional and performance requirements.

Using the same approach as employed earlier, this phase can be characterized as similar to the prior phase, but with several of the trends amplified:

  • Highly dynamic with a virtual infrastructure: Now, not only are compute and networking infrastructure virtualized, but functionality such as storage, identity, secrets, and more is presented as server-less, exposed through a Function-as-a-Service model. This results in an infrastructure that is even more ephemeral and abstract than in the prior phase.
  • Even less direct control: One notable trend is application services that are explicitly presented as a black box, logically running outside of the application’s infrastructure, and therefore offering very little infrastructure control to the application owner. Typically, these Function-as-a-Service application elements expose only a RESTful API – a consequence that leads directly to the next characteristic.
  • Even more non-human consumers: The Function-as-a-Service model is intended to be consumed by high-level software running business logic, not by humans. One consequence is that the scope of functionality a service provides is shrinking, manifest in the transition from services to microservices.  Another implication of non-human consumers and fine-grained microservices is the primacy of APIs as the control and visibility surfaces within an application (a minimal handler sketch follows this list).
  • Finally, the Application is Distributed: not just across the public cloud, but also across delivery edges (5G or metro-area networks) and enhanced CDNs (Content Delivery Networks) with embedded compute.
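
As noted in the list above, here is a minimal sketch of a FaaS-style microservice handler, shaped after the common event/context convention used by serverless platforms (for example, AWS Lambda). The event fields, data, and response format are illustrative assumptions; the point is the narrow scope of the function and the fact that its only control and visibility surface is the API payload it receives and returns.

    import json

    # Minimal sketch of a single-purpose, FaaS-style microservice handler.  The
    # event/context shape follows common serverless conventions; the fields, data,
    # and response format are illustrative assumptions.

    def handler(event, context=None):
        """Look up the price of one SKU; nothing more."""
        body = json.loads(event.get("body", "{}"))
        sku = body.get("sku")
        if not sku:
            return {"statusCode": 400, "body": json.dumps({"error": "missing sku"})}

        # A real deployment would call a managed data service; a static table keeps
        # this sketch self-contained.
        prices = {"A100": 19.99, "B205": 4.50}
        price = prices.get(sku)
        if price is None:
            return {"statusCode": 404, "body": json.dumps({"error": "unknown sku"})}
        return {"statusCode": 200, "body": json.dumps({"sku": sku, "price": price})}

    if __name__ == "__main__":
        # Local invocation for illustration; in production only API callers see this function.
        print(handler({"body": json.dumps({"sku": "A100"})}))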

Just as the characterizing trends are magnified in this next phase of application development, so too are the security requirements.  At a superficial level, the implications are an even richer notion of identity – what it means, how it is scoped, and its temporal considerations – resulting in an increased emphasis on digital signatures and the infrastructure to manage their lifecycle.  It also entails a larger threat surface because of the increased number and diversity of components and underlying infrastructures that compose an application. The breadth of components and infrastructures also requires a proportionally larger, and more robust, data collection and normalization infrastructure to accommodate the more central role that ML-driven and ML-assisted security workflows play.

At a deeper level, the distributed model also brings with it a more foundational sea change; namely, there is now no server-side centralized choke point.  Previous architecture patterns – data center, virtual data center, or public cloud – all had a single logical point of ingress, where incoming traffic could be inspected and dispatched accordingly.  Now, with different aspects of a single logical application being delivered from not just a single public cloud, but also from an edge location or a CDN, or abstracted via Function-as-a-Service (often running in a different cloud), no single public cloud has a holistic view of the application.  The security response to this challenge is to either fully distribute the data collection and remediation mechanism, or to find a new choke point. This will be the topic of a future article.

Closing Thoughts

Applications, what drives them, and how we build them, have evolved over the past two decades. In the early days, when applications were slowly changing software monoliths running on bespoke hardware, security requirements were relatively straightforward and could be handled by a small team of skilled human operators.  Applications evolved to address more use cases, adapt to new requirements more quickly, and operate at lower cost – all good things – though with the side effects of decreased centralized control, less fine-grained human oversight, and more leveraging of external capabilities.  As today’s application security practitioners grapple with the implications of these side effects, an evolved way of thinking about protection is also required: one that emphasizes the role of automation for security, understands that application consumers can be either human or machine, and considers security across the network, storage, and API layers.  This modern view of application security must also marry all of these building blocks to data collection and ML-assisted analysis to find and remediate the diverse and agile threats of today and tomorrow.  Deeper dives into how to approach application security solutions in this framework will be the topics of ONUG’s ongoing cloud native application security article series.

Author's Bio

Ken Arora

Sr. Architect, Office of the CTO, Security Products

F5