Overview of the State of the Edge 2022 Report Published by LF Edge
July 15, 2022
The pandemic accelerated the development of new technologies, made possible by edge computing, for remote monitoring, provisioning, repair, and management. COVID-19 also highlighted that expertise rooted in legacy data centers may become outdated in the coming years.
The adoption and implementation of cloud-native, containerized, and distributed applications are being accelerated by open source hardware and software projects, which are fueling innovation at the edge. According to a report released today by LF Edge, an umbrella organization of the Linux Foundation, a struggle for dominance over edge computing environments is predicted. The report aids in understanding the edge computing ecosystem by examining three critical areas: connectivity, location, and application infrastructure, which have emerged as open source edge computing's three main themes for 2022.
Each of these three is essential to the advancement of edge computing. The report examines the ins and outs of translating cloud-native principles of application development and infrastructure management to deploying and running software at the edge, as well as the new physical locations where compute infrastructure is being deployed to meet the need for ever more distributed platforms, both on the ground and in Earth's orbit.
Coredge has joined the CNCF and the Linux Foundation as a Silver Member and presents its perspective on the edge computing report to provide additional information and understanding about the market today. With connectivity, location, and application infrastructure as the three key guiding concepts for this year, let's examine the existing edge computing ecosystem.
Let us discuss a few key findings:
- Currently, the only major use cases for edge computing in rural areas are 5G infrastructure deployments, which frequently include edge computing components, and retail requirements such as inventory management and in-store security.
- Due to connectivity issues in these areas, bringing edge computing to rural communities has been challenging.
- Because the pandemic starkly revealed the human cost of inadequate or non-existent internet access, and because the economics of that connectivity are changing, there is more interest than ever in providing connectivity in rural areas. The prospect of connecting hundreds of thousands or even millions of devices used in IoT applications now outweighs the limitations of small customer bases for rural broadband.
- In addition, constellations of satellites in low Earth orbit promise to make satellite internet connectivity faster, less expensive, and more dependable. These developments are made possible in part by the growing private space industry. For edge locations that are otherwise inaccessible, that connectivity might be the best choice.
- Mobile operators are looking into the possibility of using satellites as mobile phone towers as a means of supplying the greater number of base stations required for 5G and beyond. But over the next ten years, satellites might also develop into computation platforms, with tiny data centers in orbit handling workloads transmitted from edge points on the ground and processing data gathered in space.
- Despite complexity and obstacles that are vast even for terrestrial edge equipment, data centers in space are likely to be commercially viable, at least in a limited capacity, within five to ten years.
- As the internet infrastructure landscape continues to develop, the advantages of centralization now come with evident limitations in terms of resilience, redundancy, performance, and regulation. These factors have stimulated a new wave of investment and development activity outside the conventional global tier 1 markets.
- After years of investing in centralized infrastructure footprints across a small number of key global hubs, hyperscalers aim to move to a more dispersed deployment architecture. This is driving the establishment of new data centers and network hubs across all geographic regions.
- While Kubernetes (or at least the Kubernetes API) has become the standard solution for container orchestration at data center scale, orchestration is more difficult at the edge. Because edge environments are far more heterogeneous than the cloud, finding a way to abstract workloads away from specific operating systems and hardware architectures is crucial.
- As more (and more demanding) workloads have proven to be compatible with public cloud infrastructure, what frequently remains are workloads that are best suited for the edge.
- The cloud provider model is expected to expand in the future, becoming a fabric of available compute resources that may be used as needed.
- While some cloud edge infrastructure may be present in on-premises data centers, more of it will be found in new edge data centers, embedded in edge devices, or even integrated directly into the telecom infrastructure.
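The orchestration point above can be made concrete with a standard Kubernetes pattern: the well-known node labels `kubernetes.io/os` and `kubernetes.io/arch` let a single Deployment target a heterogeneous fleet of edge nodes, with node affinity restricting scheduling to the platforms an image actually supports. A minimal sketch, assuming a hypothetical multi-arch image `example.org/edge-agent:1.0` (the name and registry are illustrative, not from the report):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-agent
spec:
  replicas: 3
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  # Standard well-known labels set by the kubelet:
                  # schedule only onto Linux nodes whose CPU
                  # architecture the image supports.
                  - key: kubernetes.io/os
                    operator: In
                    values: ["linux"]
                  - key: kubernetes.io/arch
                    operator: In
                    values: ["amd64", "arm64"]
      containers:
        - name: agent
          # Hypothetical multi-arch image; a manifest list lets each
          # node's container runtime pull the matching variant.
          image: example.org/edge-agent:1.0
```

Publishing the image as a multi-architecture manifest list is what lets the same manifest run unchanged on x86 gateways and Arm-based edge boxes, which is one practical way the Kubernetes API abstracts over hardware heterogeneity.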
Researchers and industry specialists from the computing and communications sectors have been working on this problem for more than a decade, but it has not yet been resolved, not because the problem is intractable, but because the edge is fluid and still being constructed. The edge is as much about communications as it is about computation, and even more about the combination of the two for the distribution of intelligence, whether that takes the form of distributed computation, distributed communication, or both together.
The edge must therefore be resolved jointly by the communication providers (traditionally the telcos) and the computing providers (the cloud), with a sharp focus on consumption and the underlying technology being treated only as interchangeable tools.
Get detailed insights from the report here.