
New Paradigms for Device Virtualization

Image Credit: Bigstockphoto.com

This article was written by Ulises Olvera-Hernandez, Senior Principal Engineer at InterDigital, and Chathura Sarathchandra, Staff Engineer at InterDigital.

The world is on a path towards ubiquitous computing. This path will take us away from today's paradigm, where most of our computing and communications experiences are device centric. There are set activities we do on our PCs compared to our phones and tablets. Graphically intensive video gaming is done on a console or a high-performance PC. We do different things on our work computer than we do on our home computer. Each of these scenarios is somewhat compartmentalized for us as users. Cloud computing has made certain applications, like email, essentially seamless from one device to another, but more dynamic and computationally intensive applications like gaming, video conferencing, streaming media and AR are more complex, so switching between platforms is more challenging.

And yet, the lines in our lives between work and home, between gaming and AR, between video and social media are becoming less compartmentalized all the time. Device virtualization is one technology that will help usher in this era of ubiquitous computing, where all of these scenarios become less device-centric and more experiential, focusing on the user experience rather than on the device itself.

Enhancing the working world

Imagine that the pandemic is over and you are back working in the office. You're meeting with your colleague Adrienne, and the two of you start watching a video on your mobile phone but decide it's easier to switch the feed to her desktop monitor. Once you finish the video, you decide that Greg, one of your other colleagues, needs to see it as well. So you leave Adrienne's office to find Greg, watching the video on your phone again as you go. You spot Greg and ask him if he has a couple of minutes to watch a video about the project you're all working on. He agrees, so you tap your phone on the NFC sensor beside his monitor and the video you want him to see comes up on the display.

All of these video viewing experiences began with your own device but hopped seamlessly from screen to screen and back to your device, on-demand, without the need for you to switch to a messaging app to send the link to the other person. The process by which all of these hops happen is referred to as device virtualization, and it's an important enabling technology that will help usher in the era of ubiquitous computing.

The key to device virtualization: disintegration

Device virtualization is essentially the process of taking an integrated set of device functions and disintegrating them: the device's functionality is partitioned so it can be distributed and managed dynamically. Once the various functions of a device (be it mobile or otherwise) are virtualized, they can be dynamically composed to provide various functions to the user. Essentially, once a collection of mobile functions is running on a mobile device in such a way that the functions can be moved and recombined around the user, the mobile experience is no longer constrained to a particular device.
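To make the idea concrete, here is a minimal sketch, in Python, of how a device might be modelled once its functions are disintegrated. The class names, function names and node names (VirtualDevice, migrate, "office-monitor-42") are illustrative assumptions, not an existing API; the point is simply that each function carries its own binding to whatever physical node currently hosts it, and that binding can change at runtime.

```python
# A minimal sketch of "disintegration": a device is modelled as a set of
# virtualized functions (display, processing, audio, ...) that can be detached
# from the physical handset and re-bound to other nearby nodes on demand.
# All names here are illustrative, not a real API.

from dataclasses import dataclass, field

@dataclass
class VirtualFunction:
    name: str   # e.g. "display", "processing", "audio_out"
    host: str   # the physical node currently running this function

@dataclass
class VirtualDevice:
    owner: str
    functions: dict[str, VirtualFunction] = field(default_factory=dict)

    def bind(self, name: str, host: str) -> None:
        """Attach (or re-attach) a function to a physical node."""
        self.functions[name] = VirtualFunction(name, host)

    def migrate(self, name: str, new_host: str) -> None:
        """Move an already-running function to another node, e.g. phone -> monitor."""
        self.functions[name].host = new_host

# Example: the user's "phone" starts fully integrated, then its display hops to a monitor.
phone = VirtualDevice(owner="alex")
for fn in ("display", "processing", "audio_out"):
    phone.bind(fn, host="handset")
phone.migrate("display", new_host="office-monitor-42")
```

The user's experience stays attached to the VirtualDevice; only the hosting of individual functions changes as the user moves around.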

For example, we can set up a policy so that when a user connects to the home WiFi, display functionality is moved to the nearest TV. Likewise, a processing policy can be defined so that when the mobile device connects to that home WiFi, the compute-heavy parts of the experience are offloaded onto the more powerful home computer. All of this disintegration therefore streamlines and enhances the user experience, allowing people to take advantage of the most appropriate, most powerful, and most resource-efficient devices as they move around their environment.
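A minimal, hypothetical sketch of such a policy, building on the VirtualDevice model above, might look like the following. The trigger name and target node names are assumptions for illustration; in a real system they would come from connectivity events and device discovery.

```python
# A sketch of the home-WiFi policy described above, assuming the VirtualDevice
# model from the previous snippet. Trigger and node names are hypothetical.

HOME_POLICY = {
    "trigger": "home_wifi_connected",
    "actions": [
        ("display", "living-room-tv"),    # render on the nearest TV
        ("processing", "home-desktop"),   # offload compute to the more powerful PC
    ],
}

def apply_policy(device, event: str, policy: dict) -> None:
    """Re-home virtualized functions when the policy's trigger event fires."""
    if event != policy["trigger"]:
        return
    for function_name, target_host in policy["actions"]:
        device.migrate(function_name, new_host=target_host)

# apply_policy(phone, "home_wifi_connected", HOME_POLICY)
# phone.functions["display"].host  ->  "living-room-tv"
```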

Pushing the paradigm further

Taking it one step further, we can begin to incorporate machine learning into this paradigm. As such, instead of relying on predefined policies to handle these disintegrated functions, we can use AI/ML to infer device functionality based on user behaviors. In this way, the user experience can be much more dynamic because over time the system becomes increasingly aware of each user's intent and can customize their experience, and the necessary processing and resource allocation functions, accordingly. The virtualized device can determine the best execution scenario automatically given the current conditions.
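As an illustration only, the sketch below replaces the fixed policy with a toy frequency model that learns which target a user actually chooses in a given context and then predicts it. A production system would use a far richer model and richer context features; the class name, features and targets here are assumptions.

```python
# A toy stand-in for AI/ML-driven placement: observe which target the user
# actually picks in each context, then predict the most likely target later.

from collections import Counter, defaultdict

class PlacementLearner:
    def __init__(self):
        # context -> counts of targets the user chose in that context
        self._history = defaultdict(Counter)

    def observe(self, context: tuple, chosen_target: str) -> None:
        """Record one real user decision, e.g. (('home_wifi', 'evening'), 'living-room-tv')."""
        self._history[context][chosen_target] += 1

    def predict(self, context: tuple, default: str = "handset") -> str:
        """Return the most frequently chosen target for this context, if any."""
        counts = self._history.get(context)
        return counts.most_common(1)[0][0] if counts else default

learner = PlacementLearner()
learner.observe(("home_wifi", "evening"), "living-room-tv")
learner.observe(("home_wifi", "evening"), "living-room-tv")
learner.observe(("office_wifi", "morning"), "desk-monitor")
print(learner.predict(("home_wifi", "evening")))   # -> living-room-tv
```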

This technology has much broader applications than display alone. The same principles outlined in the office example above can be applied to other device types and experiences. As we go lower in the stack and decompose more of this functionality, we can support entirely different form factors.

With most current devices -- phones, TVs, tablets, etc. -- everything is pre-packaged, from the motherboard of the hardware up to the operating system and the application layers. If we take a layered approach instead of a device-centric one, we can look through the entire stack for opportunities to virtualize various functions. This leads to different approaches to decomposition and partitioning that could apply equally well to IoT or other personal, enterprise or industrial devices. Virtualization, in this way, begins to free these experiences from the constraints of particular device types.
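Purely as an illustration of the layered view, the sketch below walks a hypothetical device stack layer by layer and flags which functions might be candidates for virtualization. The layer names and function lists are assumptions chosen for readability, not a real decomposition.

```python
# An illustrative layered inventory of a device: each layer exposes candidate
# functions that could, in principle, run off-device on a nearby or remote node.

DEVICE_STACK = {
    "hardware":    ["camera", "gpu", "display_panel", "radio"],
    "os":          ["scheduler", "media_pipeline", "network_stack"],
    "application": ["video_decode", "game_render", "ui"],
}

# Functions that, in this example, are treated as virtualizable.
VIRTUALIZABLE = {"gpu", "media_pipeline", "video_decode", "game_render", "display_panel"}

for layer, functions in DEVICE_STACK.items():
    candidates = [f for f in functions if f in VIRTUALIZABLE]
    print(f"{layer}: can virtualize {candidates}")
```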

The surmountable challenges

At this stage, one of the biggest challenges facing this approach is at the application layer: there is an almost infinitely wide range of application types. It's relatively straightforward in a laboratory environment to develop a fairly specific set of APIs that demonstrate how one or two use cases can work across a handful of device types and a few applications. But can we take any application that exists today, dynamically partition it, and then dynamically distribute its parts based on any user's context in a real-world environment? That's a much bigger challenge, because it means partitioning applications that are already packaged.

This kind of application partitioning requires inspecting the code, sometimes at the executable level, of each of these applications in order to determine what possible functionalities can be virtualized. It's a challenge that can be overcome but it will take time. As virtualization technologies become more commonplace, applications will be developed from the ground up with virtualization as part of their central framework.
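As a purely illustrative sketch of what building virtualization into an application's central framework could look like, the snippet below imagines an API where a developer marks partitionable functions up front, so no executable-level inspection is needed. The @offloadable decorator and registry are hypothetical, not an existing framework.

```python
# A hypothetical way for an application to declare its own partition points:
# functions marked @offloadable may be placed on another node by the runtime.

OFFLOADABLE_REGISTRY = {}

def offloadable(func):
    """Mark a function as safe to run on a remote node; the runtime decides placement."""
    OFFLOADABLE_REGISTRY[func.__name__] = func
    return func

@offloadable
def transcode_video(frames: list) -> list:
    # heavy work that could run on a home desktop or edge server
    return [f.upper() for f in frames]   # stand-in for real processing

def render_ui(frames: list) -> None:
    # tightly coupled to the local display, so it is not marked offloadable
    print("rendering", len(frames), "frames")

print(list(OFFLOADABLE_REGISTRY))   # -> ['transcode_video']
```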

These technology challenges can (and probably will) be overcome. Some of them also present interesting business challenges, because different resources will be required and business units may need to be redesigned to address those different resource needs. But those are surmountable challenges as well.

Chathura is a Staff Engineer at InterDigital. He currently drives research and innovation work on mobile computing and virtualization techniques, through development of concept and theory, design and proof-of-concept. Prior to this, he was a researcher at the University of Essex, contributing to a number of UK, EU and international projects.

 
 

Ulises Olvera is a Senior Principal Engineer for InterDigital Europe. He is responsible for the development of 5G 3GPP Core Network technology and 3GPP System Architecture Evolution standards at InterDigital. Prior to joining InterDigital, Ulises spent 15 years at Ericsson developing cellular systems and supporting Ericsson products in international markets.
