Edge computing has been part of the emerging technology landscape for coming up on five years now, yet somehow it remains something of a mystery (or possibly just a Byzantine maze).
The OpenInfra Edge Computing Group has been working to understand what edge computing is and what it is not. The crux of the issue: according to telecom operators and their analysts, edge is a way of delivering low latency (something telecoms will tell you is REALLY important) to applications such as Industrial IoT (autonomous vehicles, UAVs), AR/VR, gaming, and telematics; these use cases clearly depend on low latency for their performance. Another way of looking at edge centers on use cases such as video processing, AI/ML, and IoT monitoring for maintenance and diagnostics, where the objective is to use limited bandwidth more effectively by shifting information processing closer to where the data is generated (THE edge).
Although defining exactly what edge is borders on the futile, that does not mean there is not plenty to talk about. The group stepped away from creating a definition, because in our experience pinpointing what industry and analysts mean when they talk about edge computing is elusive. Far more interesting is what is happening to edge adoption as the technology, use cases and tooling mature. The working group will be sharing our findings and thoughts on some of the knottier edge challenges we have encountered; with that, we begin a journey to untangle the edge and invite you to join us.
The OpenInfra Edge Computing Group will be publishing a series of blog posts to address some of the thornier aspects of edge computing as it enters the mainstream adoption cycle. We will focus on the far more interesting discussion of the technology and architectures needed to support edge workloads, in whatever form or under whatever criteria a given use case requires. There is no question that there are many edge use cases, ranging from content delivery to tracking data from IoT devices. However, the deeper into the solutions you get, the more apparent it becomes that there are multiple edge-related efforts, each with its own definitions, architectures and requirements. That does not mean there are no common criteria and features; it simply shows that you cannot talk about edge without context. A clear picture of the use case and its requirements is essential, because both constantly evolve along with the industry and the ecosystem around them.
Some common edge considerations that will be explored in more depth include:
- Workload latency (to end users) – This metric can vary from under two milliseconds to orders of magnitude higher. Telecom operators often tout this criterion as the most important factor to consider, but that is more a function of their strong network bias than a requirement of any given edge use case. Yes, there are use cases that depend heavily on low latency, and they are worthy of further exploration.
- Network transport considerations – How does the underlying transport affect use cases and the thinking about edge? This is a rich area to dig into, as it has been somewhat overlooked. In short, transport is much harder to manage in real life than it might appear.
  - Edge should be transport agnostic, so why is it not?
  - MEC, edge and 5G are not the same thing at all, so how did they get so conflated?
- Where are my edge workloads? – How are workloads placed and moved around the edge, and between the edge and the center? Some workloads must be located at the edge to reduce backhaul and latency to varying degrees, while others can move between edge and center to offload. Determining the best option for a given use case can be tricky.
- How do constraints affect edge architectures? – One major criterion, often overlooked, is that edge, by its very nature, is defined by the constraints of its environment and infrastructure.
- Edge infrastructure and platform issues – There is wide variability in infrastructure specifications, hardware and tooling. Factors that affect deployments include:
  - Node size: from a single CPU to multiple racks (e.g., hyperscalers)
  - Does edge require special hardware, such as accelerators?
- Does edge work when multi-clouds are required? – Can the edge support multiple platforms, or does that quickly become a nightmare of mismatched tools and poor integration?
- Day 2 operations of edge deployments – When a deployment can range from tens of sites to thousands, or even millions if you count IoT devices as edge, how does deployment scale affect architecture, management and application decisions?
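To make the workload-placement question above concrete, here is a minimal sketch of a latency- and backhaul-driven placement decision. All of the names, classes and thresholds are hypothetical illustrations, not part of any real edge platform or standard; the point is only that placement falls out of the constraints each tier can satisfy.

```python
# Hypothetical sketch: pick a tier for a workload based on its latency
# budget and the backhaul capacity available at the edge site.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float   # end-user latency budget
    data_rate_mbps: float   # data the workload sends over the backhaul

@dataclass
class Site:
    name: str
    latency_ms: float       # latency from end users to this site
    backhaul_mbps: float    # available backhaul capacity toward the core

def place(workload: Workload, edge: Site, core: Site) -> str:
    """Prefer the core unless the latency budget or limited backhaul
    capacity forces the workload out to the edge."""
    if (core.latency_ms <= workload.max_latency_ms
            and workload.data_rate_mbps <= edge.backhaul_mbps):
        return core.name    # core meets the budget; keep it central
    if edge.latency_ms <= workload.max_latency_ms:
        return edge.name    # only the edge satisfies the latency budget
    return "unplaceable"    # neither tier satisfies the constraints

edge = Site("metro-edge", latency_ms=3, backhaul_mbps=100)
core = Site("regional-dc", latency_ms=40, backhaul_mbps=10_000)

ar_app = Workload("ar-overlay", max_latency_ms=10, data_rate_mbps=20)
batch_ml = Workload("ml-training", max_latency_ms=500, data_rate_mbps=50)

print(place(ar_app, edge, core))    # tight budget pushes this to the edge
print(place(batch_ml, edge, core))  # tolerant workload stays central
```

Real placement engines weigh far more than two numbers (cost, data gravity, regulatory constraints, site capacity), but even this toy version shows why no single definition of "edge" survives contact with the use case.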
About OpenInfra Edge Computing Group:
The Edge Computing Group is a working group of architects and engineers from large enterprises, telecoms and technology vendors, working to define and advance edge cloud computing. The focus is open infrastructure technologies, not exclusively OpenStack.
Get Involved and Join Our Discussions:
- Weekly Meetings
- Join the Mailing List
- Cloud Edge Computing: Beyond the Data Center White Paper
- Edge Computing: Next Steps in Architecture, Design and Testing
- Tangled Up in Edge – A Blog by the OpenInfra Edge Computing Group - June 28, 2021