
Edge Computing Trends: 5G, AI and Cloud

Image Credit: Elnur/Bigstockphoto.com

Defining edge computing

At the highest level, Edge computing is the deployment of modern computing resources, supporting cloud-native models of software lifecycle management, closer to where data from a broad range of sensors is created. There are several reasons to deploy an Edge network, including addressing the latency, privacy, cost and resiliency challenges that a pure cloud computing approach cannot.

While individual views may differ slightly, the following tenets are true of Edge systems:

1. Edge devices must process and derive insight from multiple streams of data.

2. The cloud is here to stay. There will be a hybrid model in which some system functionality will be implemented in the cloud and some will be implemented at the Edge in a factory, on an oil rig, in the fields, etc.

3. This type of platform must be more than a server blade placed in an enterprise. As Cisco noted in its original work around Fog Computing, systems at the border of the physical and IT worlds must support important attributes such as guaranteed real-time responses to specific situations, strong security and infallible safety. Think of a traffic light: when it fails, it defaults to a flashing red light that drivers know how to react to. The industry needs to deliver that same deterministic behavior for cars, robots, factory machinery and the like.

4. Where Edge data is actually processed may vary at any given time based on compute costs, network availability, the situation and other factors.

Like many buzzwords bandied about by the technology industry, Edge computing is used everywhere, and what makes it even more challenging is that this functionality could be implemented on a wide range of devices: a smartphone, a car, a drone or an industrial robot could all be edge devices.

An emerging category in this area is what Lynx refers to as the Mission Critical Edge. The Mission Critical Edge was born out of incorporating requirements typical of embedded computing -- security, real-time responsiveness and safe, deterministic behavior -- into modern networked, virtualized, containerized, lifecycle-managed, data-rich computing.

5G, AI and cloud

The arrival of 5G will have a huge impact on Edge computing. 5G promises the ability to connect machines and modern computing in a more deterministic manner, essentially guaranteeing real-time behavior. This means a robot working in close proximity with a human won't injure the worker with any action it takes. With 5G, we will get the higher bandwidth and lower latency that other networks haven't been able to deliver, but which are needed for machines to make real-time decisions at the Edge. And, unlike Wi-Fi, the number of communications happening on a given spectrum will be limited for better reliability, removing the need to hardwire factories or other settings.

Edge computing will have a big impact on the development of AI. Currently, AI models are trained on vast volumes of data that are almost exclusively processed and stored in the cloud. Placing compute at the Edge allows patterns to be found locally, on the device that captured the data. Over time, this can allow training models to become simpler and more effective. For example, car manufacturers have dramatically improved their quality-control processes through inference at the Edge because they are able to catch defects in real time, before a new car model is put into full-scale production. Similarly, when you run digital twins in parallel at the Edge, you enable one model to learn and apply something in real time from the other, improving the quality and reliability of real-world systems and enabling new functionality.
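The quality-control pattern above can be sketched in a few lines. This is a hypothetical illustration, not any manufacturer's actual pipeline: the tolerance threshold, reading format and function names are all assumptions. The point is that scoring happens on the edge device, and only out-of-tolerance results need to leave it.

```python
# Hypothetical sketch of edge-side inference for quality control:
# each reading is scored locally, and only defects are surfaced.
def is_defect(reading, tolerance=0.05):
    """Flag a part whose measured dimension deviates from nominal
    by more than the tolerance fraction (5% here, an assumed value)."""
    nominal, measured = reading
    return abs(measured - nominal) / nominal > tolerance

def filter_defects(readings):
    """Keep only the readings that need real-time attention;
    everything else stays on the device."""
    return [r for r in readings if is_defect(r)]

# (nominal_mm, measured_mm) pairs from a hypothetical sensor
readings = [(10.0, 10.01), (10.0, 10.9), (5.0, 5.02), (5.0, 4.6)]
defects = filter_defects(readings)  # only the out-of-tolerance parts
```

In a real deployment the scoring function would be a trained model rather than a threshold, but the data flow is the same: inference runs next to the sensor, and the cloud only ever sees the exceptions.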

Edge computing should be used to complement cloud computing. The cloud’s big data storage capability is impressive, enabling data management and visibility from multiple distributed deployments. That said, there are several reasons to want to keep some data local, starting with privacy. People and organizations don’t necessarily trust that big cloud companies will use their data in a well-meaning way. Consider the smart city data collected on a daily basis - local analysis at the Edge can filter unimportant data, sending only necessary, anonymized data to the Cloud. In this scenario, citizens can be assured they are not being tracked, while cities can use data in the Cloud to determine traffic patterns and other important metrics. Keeping data at the Edge also helps with cost. The final and most important reason, however, is safety. The Cloud cannot be used to control moving data that’s needed for quick decisions. Cloud computing is great for predictive data analytics, but ultra-quick decisions require rich data to be processed locally.
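The smart-city scenario above can be made concrete with a small sketch. This is an assumed design, not a specific city's system: the salt value, event format and per-intersection aggregation are illustrative choices. The idea is that identities are hashed away and data is reduced to counts at the edge, so the cloud can see traffic patterns without being able to track individuals.

```python
import hashlib

# Hypothetical sketch of edge-side filtering for smart-city data:
# raw events carry a vehicle ID, but only anonymized, aggregated
# counts ever leave the edge node for the cloud.
def anonymize(vehicle_id, salt="edge-node-7"):
    """One-way hash (salted per edge node) so the cloud sees
    traffic volume, not identities."""
    return hashlib.sha256((salt + vehicle_id).encode()).hexdigest()[:12]

def aggregate(events):
    """Reduce raw sightings to per-intersection counts of
    distinct (anonymized) vehicles -- the only payload uploaded."""
    seen = {}
    for intersection, vehicle_id in events:
        seen.setdefault(intersection, set()).add(anonymize(vehicle_id))
    return {k: len(v) for k, v in seen.items()}

events = [("5th&Main", "ABC123"), ("5th&Main", "ABC123"),
          ("5th&Main", "XYZ789"), ("Oak&1st", "ABC123")]
summary = aggregate(events)  # {"5th&Main": 2, "Oak&1st": 1}
```

Filtering this way also serves the cost argument in the paragraph above: the edge node uploads a handful of counts rather than a continuous stream of raw sightings.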

Emerging trends at the edge  

In the first phase of deployments, Edge computing often simply comprised a standard server blade deployed at a remote facility. This will evolve to optimized-for-purpose hardware, typically incorporating very powerful processors running a number of different workloads concurrently. Think of the traditional, highly virtualized computing environment of the cloud coming to hardware that fits in the palm of your hand ... and that ensures applications are isolated from each other, that one application cannot impact another, and that the functions that simply must operate in a responsive, deterministic way always do so. The Mission-Critical Edge has so far been missing from the Edge computing movement, but it will quickly become the full manifestation of the technology, given that critical safety and security applications are growing in importance across a variety of industries, especially autonomous vehicles, smart cities and manufacturing. Without the Mission-Critical Edge, many deployments in these industries would be unreliable and therefore not scalable.

How can companies use this technology?

In the post-COVID world, the market for robots will grow quickly, as they represent the only way to ensure physical separation between humans. Today, robots are deployed for specific tasks on factory floors from which humans have largely been banished. In reality, the path to improving factory efficiency is to harness the best of both humans and robots, which means having them work on projects together. For this to become a reality, the Mission-Critical Edge is necessary: cobots need much closer control for real-time implementation of complex decisions in co-working environments. Given the speed and latency required, and the consequences of getting a decision wrong, this decision-making must happen on compute inside the robot itself.

Author

Flavio Bonomi, Board Advisor to Lynx Software Technologies, is a visionary, entrepreneur, and technologist who thrives at the boundary between applied research and advanced technology commercialization. He works closely with Lynx to identify potential new markets for Lynx technologies.
