
A Deep Dive into AI for Wireless Networks

Image Credit: archy13/Bigstockphoto.com

Wireless networks have evolved tremendously from the first GSM SMS two-word message, “Merry Christmas,” in 1992 to today’s expectation of 100 Mbps, with a real appetite for 1 Gbps, for the average wireless consumer. Over the ensuing years, semiconductor technology has followed Moore’s law to shrink chip features by over a factor of 5000, enabling more complex processing algorithms and the use of wide new bands of RF spectrum. Wireless customers reluctantly tolerated dropped calls when mobile service began because it was still better than stopping to find a payphone. Now we expect ubiquitous high-speed data service anywhere, anytime. Today’s lifestyles and thousands of new mobile applications have pushed technology complexity beyond what conventional signal processing and network management algorithms can handle. Fortunately, AI’s rapid evolution provides solutions that work symbiotically with current wireless standards to address these challenges and offer a clear path to next-generation cellular.

The first uses of AI in 5G networks mimic what humans can do, only much faster and at scale. By optimizing the selection of operating bands and the scheduling of network access time and resources, networks become more efficient and deliver a better user experience. To enable voice and image recognition and automation for many new use cases, AI algorithms learn from real-world data rather than from explicit, human-engineered code; they are deployed as deep learning neural networks inspired by human intelligence and the structure of the human brain.

Similarly, deep artificial neural networks will move closer to a base station’s radio and replace traditional signal processing algorithms, reducing computational load and improving performance. The term AI-Native is often used to describe this replacement of traditional signal processing with AI that learns natively from data.

One way to understand the impact of AI on 5G is to look at how AI will function in the 5G virtual RAN (vRAN) and the Open Radio Access Network (O-RAN) architecture. O-RAN is an industry standard for implementing virtualized base stations in which much of the processing runs on commercial servers instead of custom hardware. The O-RAN architecture includes Radio Unit (RU) components and processing deployed at or near the cell site. The RU communicates with a nearby Distributed Unit (DU) for additional signal processing, which in turn passes digital messages to a Central Unit (CU) that orchestrates multiple DUs and runs the protocol processing to complete calls and implement network control.
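To make the RU/DU/CU split concrete, the following minimal Python sketch models the hierarchy as plain data structures. The class and field names (and the example site IDs and band labels) are illustrative assumptions for exposition, not the O-RAN specification’s actual object model.

```python
# Illustrative sketch only: simplified class names, not O-RAN's actual data model.
from dataclasses import dataclass, field

@dataclass
class RadioUnit:                     # RU: RF front end at or near the cell site
    site_id: str
    bands: list[str]                 # e.g. ["n78"] for 3.5 GHz TDD (assumed example)

@dataclass
class DistributedUnit:               # DU: low-latency Layer 1/2 processing
    du_id: str
    radio_units: list[RadioUnit] = field(default_factory=list)

@dataclass
class CentralUnit:                   # CU: protocol processing and orchestration of DUs
    cu_id: str
    distributed_units: list[DistributedUnit] = field(default_factory=list)

# One CU orchestrating two DUs, each fronted by its own RU.
cu = CentralUnit("cu-metro-1", [
    DistributedUnit("du-east", [RadioUnit("site-101", ["n78"])]),
    DistributedUnit("du-west", [RadioUnit("site-102", ["n78"])]),
])
```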

The most advanced software will soon reside in the DU, where it replaces and enhances multiple Layer 1 signal processing algorithms with purpose-designed neural networks inside the DU’s low-latency processing functions. In doing so, the AI-enabled DU receiver functions reduce computational load and server requirements, yielding an immediate CapEx saving. Less server hardware also lowers power consumption and cooling demand, reducing OpEx. AI also substantially improves 5G link budget performance, especially at the cell edge and under interference. This raises call quality in areas that normally suffer poor coverage and helps network planners extend cell coverage without additional hardware and costly civil infrastructure. The improved link budget is especially valuable for greenfield and growth deployments, where larger cells translate into significant network cost savings for the operator.
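A back-of-the-envelope sketch shows why link budget gains matter for coverage. Under a simple log-distance path-loss model, a gain of Δ dB extends the cell radius by a factor of 10^(Δ/(10n)), where n is the path-loss exponent. The 3 dB gain and the exponent below are illustrative assumptions, not measured figures from any specific deployment.

```python
# Assumed numbers for illustration: a 3 dB receiver gain and an urban-macro
# path-loss exponent of 3.5, plugged into a log-distance path-loss model.

def radius_gain(link_budget_gain_db: float, path_loss_exponent: float) -> float:
    """Multiplicative increase in cell radius for a given link-budget gain."""
    return 10 ** (link_budget_gain_db / (10 * path_loss_exponent))

gain_db = 3.0            # assumed receiver improvement (dB)
n = 3.5                  # assumed path-loss exponent for an urban macro cell
factor = radius_gain(gain_db, n)
print(f"{gain_db:.1f} dB gain -> ~{factor:.2f}x radius, ~{factor**2:.2f}x coverage area")
```

Under these assumptions, roughly a 22% larger radius and almost 50% more coverage area per cell, which is where the greenfield cost savings come from.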

To drill a little deeper, this advanced software applies a subset of AI called machine learning (ML), which learns near-optimal statistical representations from real-world data for a given set of computational resources. Traditional signal processing relies on fixed mathematical algorithms, and over time these have become very complicated as engineers try to account for every signal environment they can envision. ML, by contrast, uses real-world data, such as the signal quality handsets report to the base station, to learn the best way to recover the transmitted data within the available resources. Because it learns from real-world data, ML handles signal conditions, equipment imperfections, and data-rate demands that traditional models never anticipated, finding the solution that delivers the highest-quality customer experience. Furthermore, ML can continuously learn changes in the physical environment, adapting to different cell site settings, rural, suburban, dense urban, and highway, as well as to shifting traffic patterns driven by events and time of day.
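The toy Python sketch below illustrates the core idea of learning from observed data rather than hand-coding a channel model: a single-tap equalizer is fit to noisy pilot observations by gradient descent. Real AI-native receivers use far larger neural networks and richer data; the channel value, noise level, and learning rate here are assumptions chosen purely for the demonstration.

```python
# Toy illustration: learn an equalizer from pilot observations instead of
# hand-deriving it from a channel model. All parameters are assumed values.
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: QPSK pilots distorted by an unknown channel plus noise.
pilots = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
true_channel = 0.8 * np.exp(1j * 0.6)                 # unknown to the receiver
received = true_channel * pilots + 0.05 * (rng.standard_normal(256)
                                           + 1j * rng.standard_normal(256))

# Learn a one-tap equalizer w so that w * received ≈ pilots (MSE loss).
w = np.complex128(1.0)
lr = 0.1
for _ in range(200):
    err = w * received - pilots
    grad = np.mean(err * np.conj(received))           # gradient of MSE w.r.t. conj(w)
    w -= lr * grad

print("learned equalizer:", w, "ideal:", 1 / true_channel)
```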

A leading feature of ML in a virtualized DU is that performance can be dialed between optimal signal quality and computational load, as the sketch below illustrates. For a greenfield deployment with low-complexity 4x4 MIMO, the neural network can be expanded to give the best possible performance, increasing cell size and thereby reducing the number of cell sites. As network densification occurs, the neural networks can be scaled down to reduce computational load. Supporting additional spectrum or higher-order MIMO with less server hardware lowers both CapEx and OpEx.
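One way to picture this dial is as the same receiver-network family instantiated at different widths for different deployment stages, trading quality against compute. The profile names, layer sizes, and dimensions in this Python sketch are assumptions for exposition, not vendor or O-RAN configuration parameters.

```python
# Illustrative trade-off only: assumed layer widths for two deployment profiles.
PROFILES = {
    "greenfield_max_coverage": [256, 256, 128],   # widest network, best link budget
    "dense_urban_low_compute": [64, 64],          # slimmed down after densification
}

def dense_macs(input_dim: int, hidden: list[int], output_dim: int) -> int:
    """Approximate multiply-accumulate count for a fully connected stack."""
    dims = [input_dim, *hidden, output_dim]
    return sum(a * b for a, b in zip(dims, dims[1:]))

for name, hidden in PROFILES.items():
    print(f"{name}: ~{dense_macs(96, hidden, 8):,} MACs per inference")
```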

AI-Native technology in the virtual RAN baseband dramatically improves 5G, with significant gains in call quality, throughput, and power efficiency. These improvements can be realized in 5G today and begin the incremental improvements coming in 5G-Advanced. Going further, AI-Native technology can relax even more of the rigid OFDM waveform constraints in 5G: AI-Native waveforms learn the best way to carry data over the air across numerous degrees of freedom without those constraints. Learned waveform technology is being investigated incrementally for 5G-Advanced and seriously as a “greenfield” efficient waveform for 6G, with some organizations pioneering the AI path beyond 5G through both near-term improvements and the technology enabling a long-term, fundamental AI transformation of the waveform itself.

Author

James Shea is the co-founder and CEO of DeepSig Inc. A highly experienced executive with an impressive track record of startup launches and company turnarounds, he brings a wealth of knowledge in driving innovation across the commercial wireless, military electronics, and test and measurement markets.
