There’s a lot of talk about the ‘edge’ these days—edge compute, edge cloud, near edge, far edge, the list goes on.
We see these as just different approaches to ensuring that enterprise employees have ‘always-on’ access to the data, content, and applications they need to do their jobs, regardless of their physical location. These edge approaches also deliver a higher-quality customer experience and meet application-specific latency and bandwidth needs.
For over a decade, enterprises have addressed this need by moving many applications into public, private, and hybrid cloud compute environments. Traditionally, though, this meant a centralized cloud approach in which enterprise compute and storage applications were hosted in far-off public data centers, and remote enterprise users accessed those applications and content through network connectivity services aggregated through headquarters or the company data center.
Three key trends are driving the need to move applications, content, and data closer to enterprise users:
Enterprises are at different stages of this cloud evolution. A given enterprise’s data and user requirements, application demands, and stage of evolution determine the customized solution that fits it best.
Some are simply looking to optimize the way they connect their dispersed branch locations to applications in a traditional cloud compute environment. In this scenario, they’re replacing static network connectivity and security with Software-Defined WAN (SD-WAN), which lets them optimize and secure network traffic across various underlying connectivity types, including 5G.
In addition, they’re replacing legacy physical network devices, such as routers, firewalls, and load balancers, in each location with Virtual Network Functions (VNFs) or Containerized Network Functions (CNFs) hosted on a single edge device that combines a virtual connectivity switch with x86 compute server modules.
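To make that consolidation concrete, here’s a minimal Python sketch of the two ideas above: a service chain of virtualized network functions hosted on one edge device, and an SD-WAN-style policy that steers each application over whichever underlay link (broadband, MPLS, or 5G) currently meets its latency and loss targets. The function names, SLA thresholds, and link metrics are illustrative assumptions, not any particular vendor’s API.

```python
"""Illustrative sketch only: a single edge device hosting a chain of virtual
network functions, plus an SD-WAN-style policy that steers each application
over whichever underlay link meets its latency/loss needs. All names,
thresholds, and link metrics below are hypothetical."""

from __future__ import annotations
from dataclasses import dataclass


@dataclass
class UnderlayLink:
    name: str          # e.g. "broadband", "mpls", "5g"
    latency_ms: float  # currently measured latency
    loss_pct: float    # currently measured packet loss


@dataclass
class AppPolicy:
    app: str
    max_latency_ms: float
    max_loss_pct: float


# Virtual/containerized network functions consolidated on one edge device,
# in the order traffic traverses them (replacing discrete hardware boxes).
SERVICE_CHAIN = ["firewall-vnf", "router-vnf", "load-balancer-cnf"]

# Per-application steering policies (illustrative values).
POLICIES = [
    AppPolicy("voice", max_latency_ms=30, max_loss_pct=0.5),
    AppPolicy("point-of-sale", max_latency_ms=100, max_loss_pct=1.0),
    AppPolicy("backup", max_latency_ms=500, max_loss_pct=5.0),
]


def select_path(policy: AppPolicy, links: list[UnderlayLink]) -> UnderlayLink | None:
    """Pick the lowest-latency link that satisfies the app's SLA, if any."""
    candidates = [l for l in links
                  if l.latency_ms <= policy.max_latency_ms
                  and l.loss_pct <= policy.max_loss_pct]
    return min(candidates, key=lambda l: l.latency_ms) if candidates else None


if __name__ == "__main__":
    links = [
        UnderlayLink("broadband", latency_ms=45, loss_pct=0.8),
        UnderlayLink("mpls", latency_ms=25, loss_pct=0.1),
        UnderlayLink("5g", latency_ms=35, loss_pct=0.4),
    ]
    chain = " -> ".join(SERVICE_CHAIN)
    for policy in POLICIES:
        link = select_path(policy, links)
        print(f"{policy.app}: via {chain}, "
              f"steered over {link.name if link else 'no compliant link'}")
```

The point of the sketch is the design choice, not the code itself: policy and path selection live in software on one box, so adding a new application class or a new underlay link is a configuration change rather than a hardware refresh.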
Other enterprises are looking to expand this concept to include virtualized business applications, such as point of sale, inventory, video, analytics, and AI, on a single higher-capacity edge compute device, either on premises or in a local micro data center.
Some enterprises are taking a further step by adding multi-tenant, multi-cloud integration capabilities, hosted on that same device and enabled by a robust software stack, so that each branch, work-from-home, or mobile worker can access applications from a single edge device.
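Below is a similarly hedged Python sketch, with hypothetical workload names and resource figures, of what that looks like: one higher-capacity edge node hosting business applications alongside network functions, with a simple capacity check and a per-tenant view so each branch or remote worker sees only the applications it’s entitled to.

```python
"""Illustrative sketch only: a higher-capacity edge node hosting business
applications alongside network functions and serving multiple tenants
(branches, home, and mobile workers). Workload names, resource figures,
and tenant labels are hypothetical."""

from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Workload:
    name: str
    vcpus: int
    mem_gb: int
    tenants: list[str]  # which branches/workers may reach this workload


@dataclass
class EdgeNode:
    name: str
    vcpus: int
    mem_gb: int
    hosted: list[Workload] = field(default_factory=list)

    def can_host(self, w: Workload) -> bool:
        """Check remaining CPU and memory before placing a workload."""
        used_cpu = sum(x.vcpus for x in self.hosted)
        used_mem = sum(x.mem_gb for x in self.hosted)
        return used_cpu + w.vcpus <= self.vcpus and used_mem + w.mem_gb <= self.mem_gb

    def place(self, w: Workload) -> None:
        if not self.can_host(w):
            raise RuntimeError(f"{self.name}: insufficient capacity for {w.name}")
        self.hosted.append(w)

    def apps_for(self, tenant: str) -> list[str]:
        """Multi-tenant view: which hosted workloads a given branch/worker can use."""
        return [w.name for w in self.hosted if tenant in w.tenants]


if __name__ == "__main__":
    node = EdgeNode("branch-edge-01", vcpus=32, mem_gb=128)
    for w in [
        Workload("sd-wan-cnf", 4, 8, tenants=["branch-a", "branch-b", "remote-workers"]),
        Workload("point-of-sale", 4, 16, tenants=["branch-a"]),
        Workload("video-analytics", 8, 32, tenants=["branch-a", "branch-b"]),
        Workload("cloud-gateway", 2, 4, tenants=["branch-a", "branch-b", "remote-workers"]),
    ]:
        node.place(w)
    print("branch-a sees:", node.apps_for("branch-a"))
    print("remote-workers see:", node.apps_for("remote-workers"))
```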
There are many benefits of this type of ‘virtualized edge solution’ to the enterprise, including:
A small percentage of enterprises will choose to deploy and manage this ‘virtualized edge infrastructure’ themselves in each remote branch location or local micro data center hub. Most would rather purchase it on an ‘as-a-service’ basis from a Communications Service Provider (CSP), a Managed Service Provider (MSP), or a Cloud Service Provider. Regardless of which ‘cloud-native’ strategy an enterprise chooses to follow, a few key practices can make the journey simpler, less costly, and less risky.
The evolution to a multi-service virtualized edge cloud environment is rarely a simple task for either an enterprise or a service provider. Migrating from a rigid, hardware-centric environment to an agile, software-based, disaggregated one can be a complex endeavor, especially when multiple VNFs, new containerized applications, and multi-tenant cloud services from a variety of suppliers are involved. Enterprises will either need to build that expertise in-house or engage an experienced systems integration partner to create and validate the solution design, conduct the staging and site deployment, and perform any required lifecycle management or managed operations to realize the full benefits of a multi-service virtualized edge cloud solution.