
Network Intelligence using Edge Computing

Image Credit: ilixe48/BigStockPhoto.com

The surge in network demand for higher bandwidth, speed and reliability has driven a rapid increase in deployments of network devices, new protocols and technologies, creating complex workflows for internet service providers (ISPs) and communication service providers (CSPs). Manually designing, ordering, fulfilling and assuring end-user services on these networks has become a herculean task. To manage their networks seamlessly amid this rising complexity, achieving zero-touch operation through network and service automation is the only way forward.

Combining artificial intelligence and machine learning makes it possible to measure and detect the factors affecting the Quality of Experience (QoE) of applications, creating dynamic networks that adapt to demand in a way manual operations cannot. Understanding application demands requires massive data processing, and achieving real-time responses and decisions through centralized decision making demands enormous compute and bandwidth.

Pushing a portion of the computing, processing and inference to edge devices within the network is the only way to build cost-effective, fast-responding systems that can adapt in real time to the demands of end-user applications.

Advantages of deploying edge intelligence in network intelligence

Distributed Intelligence: Network intelligence platforms analyze patterns, predict outcomes, detect anomalies and perform proactive corrections. The typical approach consumes enormous compute resources and cost, since it churns data through machine learning models and artificial intelligence algorithms on high-end GPUs. Distributing the analysis and running machine learning models across millions of edge devices can greatly reduce cloud compute costs and speed up decisions.
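One common split is to train centrally and infer locally. The sketch below assumes a tiny logistic-regression model whose weights were fitted in the cloud and pushed to each edge device; the feature names, weights and threshold are all illustrative, not a real product's model.

```python
import math

# Hypothetical weights for a tiny logistic-regression model, assumed to be
# trained in the cloud and pushed down to each edge device.
WEIGHTS = {"packet_loss": 4.0, "jitter_ms": 0.08, "retries": 0.5}
BIAS = -3.0

def degradation_probability(sample: dict) -> float:
    """Run model inference locally instead of shipping raw samples to the cloud."""
    z = BIAS + sum(WEIGHTS[k] * sample.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def should_escalate(sample: dict, threshold: float = 0.8) -> bool:
    """Only samples the edge model flags are forwarded for cloud analysis."""
    return degradation_probability(sample) >= threshold
```

Because each device scores its own traffic, only the rare flagged samples ever consume cloud GPU time.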

Real-time decisions: The ability to make localized decisions and respond faster is another advantage of edge computing. Raw data can be processed quickly at the edge, and patterns can be deduced in real time to detect anomalies within the network.
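A minimal sketch of such on-device anomaly detection is a sliding-window z-score over recent latency samples; the window size, warm-up count and threshold below are illustrative defaults, not tuned values.

```python
from collections import deque
import statistics

class LatencyAnomalyDetector:
    """Flags latency samples that deviate sharply from the recent baseline."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # rolling baseline of recent samples
        self.z_threshold = z_threshold

    def observe(self, latency_ms: float) -> bool:
        """Return True if the new sample looks anomalous versus the baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # need a minimal baseline first
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            if stdev > 0 and abs(latency_ms - mean) / stdev > self.z_threshold:
                anomalous = True
        self.samples.append(latency_ms)
        return anomalous
```

The detector keeps only a small fixed-size window in memory, which is what makes it viable on resource-constrained edge hardware.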

Reduced cloud storage: Since most of the data can be processed at the edge, the volume pushed to the cloud can be regulated to a large extent, reducing cloud storage and cost.

Data privacy: Data privacy is a major concern today. Because data from users' personal IoT devices such as mobiles and tablets must be processed, and because data stored in the cloud can be compromised, users are uncomfortable sharing raw data with the cloud. Edge computing eliminates the need to store raw data in the cloud by making it possible to analyse it at the edge. Privacy can be ensured by uploading only post-processed, analysed metadata from which any private user information has been pruned.
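In practice this pruning step can be a small transform that runs before any upload. The sketch below assumes invented field names (`mac_address`, `throughput_mbps`, etc.) purely for illustration: identifying fields are dropped and only aggregate statistics leave the device.

```python
# Fields considered personally identifying in this illustrative schema.
PRIVATE_FIELDS = {"mac_address", "hostname", "client_ip"}

def to_uploadable_metadata(records: list[dict]) -> dict:
    """Strip identifying fields and keep only aggregate statistics."""
    cleaned = [
        {k: v for k, v in r.items() if k not in PRIVATE_FIELDS}
        for r in records
    ]
    throughputs = [r["throughput_mbps"] for r in cleaned]
    # Only these aggregates are pushed to the cloud; raw records stay local.
    return {
        "device_count": len(cleaned),
        "avg_throughput_mbps": sum(throughputs) / len(throughputs),
        "min_throughput_mbps": min(throughputs),
    }
```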

Distributed computing: Jobs that require huge compute in the cloud can be distributed to the edge by breaking them into smaller chunks. This consumes otherwise unutilized edge computing power and, in parallel, offloads a lot of storage from the cloud.
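The chunk-and-merge pattern can be sketched in a few lines. Here `run_on_edge` is a stand-in for a real dispatch mechanism (an RPC to an edge node, for example); the cloud side only merges small partial results.

```python
def chunk(items: list, size: int):
    """Split a large job into fixed-size chunks."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def run_on_edge(node_id: int, batch: list) -> float:
    # Placeholder for work executed locally on an edge node,
    # e.g. summarising one slice of a telemetry stream.
    return sum(batch)

def distribute(job: list, nodes: int) -> float:
    """Fan the job out across edge nodes, then merge partial results."""
    size = max(1, len(job) // nodes)
    partials = [run_on_edge(i, b) for i, b in enumerate(chunk(job, size))]
    return sum(partials)  # the cloud only sees the small partials
```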

Swift feedback: Feedback after a configuration change on network devices is vital for measuring, monitoring and achieving the desired service-level agreements (SLAs) in a network system. Often a lot of statistical data must be sent from the edge to the cloud to learn how well the network's SLAs are being met. Offloading this to the edge provides immediate feedback, which helps operators take timely action and avert failures within the network.
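A hedged sketch of that edge-side feedback loop: after a configuration change, the device computes SLA compliance locally and returns only a compact verdict instead of raw statistics. The 99% target is an illustrative policy, not a standard value.

```python
def sla_feedback(latency_samples: list[float], sla_ms: float) -> dict:
    """Summarise SLA compliance locally; only this small dict leaves the edge."""
    within = sum(1 for s in latency_samples if s <= sla_ms)
    compliance = within / len(latency_samples)
    return {
        "compliance_pct": round(compliance * 100, 1),
        "sla_met": compliance >= 0.99,  # illustrative 99% compliance target
    }
```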

Design consideration

The biggest challenge with edge computing is resource availability. Edge software must share resources such as CPU and RAM with the device's native applications, and the performance and services of those native applications must not be compromised.

To ensure this, the edge software should use only the spare compute and memory available on the user's edge device, or run when the native applications are idle. The key to success in edge computing is correctly measuring how much compute and memory can be used without disturbing the functionality of the edge device.
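One simple way to approximate "only when idle" is to gate analytics on system load. The sketch below uses `os.getloadavg()` (Unix-only) and an illustrative per-core threshold; a production scheduler would also account for memory and the specific native workload.

```python
import os

def spare_capacity_available(max_load_per_core: float = 0.6) -> bool:
    """Report whether the device is idle enough to run edge analytics.

    The 0.6 load-per-core threshold is an illustrative policy choice.
    """
    load_1min, _, _ = os.getloadavg()  # Unix-only 1/5/15-minute load averages
    cores = os.cpu_count() or 1
    return (load_1min / cores) < max_load_per_core

def maybe_run_analytics(task):
    """Run the task only on spare capacity; otherwise defer it."""
    if spare_capacity_available():
        return task()
    return None  # defer rather than starve the device's native applications
```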

Business case for Edge success

  1. Edge software should be scalable both horizontally and vertically across domains. Software models developed for one network element should be generic enough to be provisioned to other network elements within the network. For example, to measure the Quality of Experience (QoE) of a network, it is necessary to know the QoE achieved by each network element, such as switches, routers and OLTs. Designing a separate QoE measurement system for each of these network elements and coordinating them across the network is not a scalable model. Instead, edge software models should be designed so they can be reused across multiple network devices catering to multiple business verticals.
  2. Understanding the limitations of the edge devices in the user's network ensures that the edge compute model is feasible for the business. For example, IoT devices have limited compute. Given the limited resources available, running pieces of software within these IoT devices may not be the right approach to edge analytics. Instead, edge computing can be performed on a controller deployed in the network where all of these IoT devices' connections terminate and better compute and memory are available. Making software design decisions without being aware of these constraints will have detrimental effects on business prospects.
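The reusable-model idea from point 1 can be sketched as one shared scoring function fed by thin per-device adapters. The scoring formula, device kinds and field names below are invented for illustration, not an actual QoE standard.

```python
def qoe_score(latency_ms: float, loss_pct: float) -> float:
    """One generic scoring function shared by every network element type.

    The coefficients are illustrative, not a calibrated QoE model.
    """
    score = 100.0 - latency_ms * 0.2 - loss_pct * 10.0
    return max(0.0, min(100.0, score))

# Each adapter maps a device-specific stats payload onto the two generic
# inputs the shared model expects, so new element types only need an adapter.
ADAPTERS = {
    "router": lambda raw: (raw["rtt_ms"], raw["drop_pct"]),
    "olt": lambda raw: (raw["onu_latency_ms"], raw["frame_loss_pct"]),
}

def score_device(kind: str, raw_stats: dict) -> float:
    latency, loss = ADAPTERS[kind](raw_stats)
    return qoe_score(latency, loss)
```

Supporting a new business vertical then means writing one adapter rather than a whole new measurement system.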

Requirements to be met while processing data in the edge

Stats, events and patterns are the three types of data, and it is important to understand how much of each will be stored in the cloud and how much must be analyzed at the edge to reach an outcome. This requires deep domain expertise. Although edge computing is resource-constrained, the following requirements must be met during data processing:

  1. Critical data must be gathered.
  2. Measures must be taken to ensure that decisions are fast; larger sample sizes slow processing.
  3. Feedback system: post-decision feedback is necessary for tuning the machine learning algorithms in the cloud and at the edge.
  4. Device-agnostic data extraction must be considered in the multi-vendor, multi-technology, multi-standard and multi-cloud environment in which these network edge devices operate.
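Requirement 4 usually comes down to normalising vendor-specific counters into one common schema before any edge processing. A minimal sketch, with invented vendor names and field names:

```python
# Per-vendor mappings from native counter names to one common schema.
# All names here are illustrative, not real vendor MIBs.
FIELD_MAPS = {
    "vendor_a": {"tx_bytes": "bytes_out", "rx_bytes": "bytes_in"},
    "vendor_b": {"outOctets": "bytes_out", "inOctets": "bytes_in"},
}

def normalize(vendor: str, raw: dict) -> dict:
    """Rename vendor-specific keys to the device-agnostic schema,
    dropping fields the common schema does not know about."""
    mapping = FIELD_MAPS[vendor]
    return {mapping[k]: v for k, v in raw.items() if k in mapping}
```

Everything downstream (anomaly detection, QoE scoring, SLA feedback) then operates on the common schema only.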

Future of edge computing

Edge computing will become a prerequisite across multiple verticals. Verticals such as blockchain, augmented reality and VR demand extremely low latency and high reliability. Future edge devices will likely include machine learning and artificial intelligence offloading chips in their hardware design to meet the needs of edge computing.

With the introduction of 5G, edge computing will soon become a mandate for achieving the immersive experiences demanded by new-age, cutting-edge applications. Communication service providers (CSPs) and internet service providers (ISPs) are seeking new revenue streams to scale their businesses, and delivering reliable voice, video conferencing and streaming services will only become more important.

In a larger context, edge computing will be an enabler for much broader use cases, such as within the Internet of Things (IoT), and could potentially be bundled with other enterprise offerings such as 5G private networks.

Author

Guharajan Sivakumar, co-founder of Aprecomm, is the company's in-house think tank and envisioned Aprecomm's foundation: bringing an intelligent stack, combining machine learning and wireless experience, to network devices. In his 17 years of service, Guharajan has founded and transformed multiple businesses in the technology space, holding distinguished senior leadership and engineering roles. At his last venture, he built a telepresence company that aimed to deliver high-definition, immersive video conferencing experiences. He has also co-authored several patents on building scalable WiFi networks for high-density environments.
