
Pushing the Edge Toward Transformative Customer Experiences in the Data Center Space


The path to technological transformation for the enterprise runs to - and through - the edge.

News coverage of edge computing frequently references futuristic use cases, such as autonomous vehicles and just-in-time 3D print manufacturing. While these scenarios might soon become widespread reality, organizations of all types see the edge immediately ahead of them. Moving from on-premises IT solutions to the cloud and beyond are hyperscale cloud providers, enterprises, 5G carriers, and media and gaming companies - not to mention higher education institutions newly focused on distance learning and governments capturing intelligence for national defense.

Many of these organizations have thrown away the crystal ball and are deploying their edge computing strategies right here, right now.

How did we get here?

The evolution toward the edge began with Web 1.0 in the 1990s, when bandwidth was wired and slow, the concept of latency was just emerging, and computing was PC/client-server based. The next iteration - Web 2.0 - introduced broadband and mobile computing, where every person could interact via social media. Latency became more pressing, scrutinized and measured more than ever, as demand for crisp video images took hold with the rise of gaming, movie downloads, and other interactive applications. Computing moved into the cloud/SaaS realm and infrastructure was regionalized.

Internet 3.0, or Web 3.0, portends ubiquitous 5G where everything is connected. Latency sensitivity is high, computing happens at the edge or on the device, and infrastructure is localized. In this scenario, edge data centers enable new latency-sensitive applications and minimize the overflow of network traffic going to and from the cloud, keeping local traffic local.

With Web 3.0, IT infrastructure will undergo an even greater evolution. Ubiquitous 5G access will mean most of us will have a faster broadband connection on our phones than we do in the office. The ever-expanding Internet of Things (IoT) means that not only is every person creating content, but everything is now generating data, and doing so everywhere imaginable. Applications like augmented/virtual reality and robotics demand ultra-fast network response with less than 10 milliseconds of latency. Where PCs and centralized clouds dominated the compute landscape in former eras, small, embedded edge computing devices are the primary compute platform of Web 3.0.

In response to this new era of the internet, IT infrastructure is evolving in two fundamental ways:

Infrastructure is becoming much more agile and software-driven. We’re moving away from traditional telecommunications-based models, where assets and connections were physical entities, provisioned slowly, and managed by the provider. Now, thanks to digitally driven models, these same assets are increasingly virtual, provisioned instantly and on demand, and managed by the users themselves (as sketched below).

We are now seeing a shift to a geographic, localized edge model that provides faster and better service to customers, consumers, and end users. However, it’s important to note that the edge is not simply a matter of geography or distance; instead, it’s a ubiquitous edge that can exist in different modes. Sometimes the edge will be geographic, extending from Tier 1 cities to more rural markets. It may also be application-specific, depending on a company’s core business and application latency sensitivity, which can vary depending on whether it provides content, networking, or computing.
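To make the first of these shifts concrete, here is a minimal sketch of what instant, user-managed provisioning can look like in practice. The portal URL, endpoint, and request fields below are illustrative assumptions, not any specific provider’s API:

```python
import requests

# Hypothetical portal endpoint and API token; real providers expose their own
# (differently shaped) provisioning APIs.
PORTAL_API = "https://portal.example-provider.com/api/v2"
API_TOKEN = "replace-with-your-token"

def provision_cross_connect(a_side: str, z_side: str, bandwidth_mbps: int) -> dict:
    """Request a virtual cross-connect between two colocation endpoints.

    In the telco-era model this was a physical cable ordered by ticket and
    installed over days or weeks; in the software-driven model it is an API
    call that activates capacity on demand and is managed by the user.
    """
    response = requests.post(
        f"{PORTAL_API}/cross-connects",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "a_side": a_side,              # e.g. an enterprise cage or cabinet ID
            "z_side": z_side,              # e.g. a cloud on-ramp or carrier port
            "bandwidth_mbps": bandwidth_mbps,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    order = provision_cross_connect("CAGE-ATL-101", "CLOUD-ONRAMP-EAST-1", 1000)
    print("Provisioned virtual cross-connect:", order.get("id"))
```

The point is the shape of the interaction: what was once a ticket and a provider-managed physical connection becomes a single authenticated API call that the user controls.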

Understanding these changes, and how to adapt to them, will be the key for data centers and the enterprises they support to capitalize on all that Web 3.0 has to offer. 

Such a transformation will lead to real business benefits for enterprises in the form of:

Greater Agility: IT leaders will be able to better adjust and align their infrastructure with the demands of their applications in real-time, eliminating the need to overbuild capacity.

More Efficiency: Organizations will be able to automate provisioning and management efforts and reduce the burden on existing IT staff.

Improved Scale: IT infrastructure will be better able to scale to meet the demands of new applications at the edge and to improve overall reach to partners, service providers, and systems.

The combination of software-centric assets and architecture is already changing the way IT leaders deploy and manage their infrastructure.

Edge-networked colocation has a distinct customer focus in the data center. This brand of colo has its own heritage: it grew out of a telco-centric model with fixed, centralized data centers, physical connections and cross-connects, provider management, and manual, slow provisioning with limited opportunities for change. The more recent software-centric model uses flexible and distributed clouds with virtual connections and cross-connects - and instant, programmable provisioning that is user-managed.

In these edge-evolved platforms, a “LEGOs-type” infrastructure is an interlocking environment where discrete infrastructure elements - data centers, servers, networks, and interconnections - need to be managed and coordinated among multiple parties. A better iteration for 2021 could be likened to “Infrastructure as Minecraft,” where one programmable pool of computing and connectivity resources can be activated anywhere and managed in the palm of the hand.

As infrastructure moves to the edge, so do the geographic requirements: Tier 1, 2, and 3 markets now extend to small towns and rural centers, and even to remote cell towers. The same is true for applications: gaming moves closer to the video provider; the telco needs to move closer to the mobile provider; and the cloud needs to move closer to the compute provider edge. Latency standards will move from 50-100 milliseconds (ms), to 10-50 ms, to under 10 ms. All of this continues to evolve.

So, in one sense, the edge is geography-specific. Today, much of the Internet’s content is still centralized in data centers located in tier 1 and 2 metros. Emerging IoT applications and sensors may someday create the need for an edge that can support compute and storage in remote locations, but it’s just as likely the edge will continue to be built out in large cities as well.

In another sense, the edge is application-specific, and the configuration of its infrastructure varies accordingly. The edge for a hyperscaler that’s building a new availability zone in a tier 2 metro requires several thousand square feet of space and several MW of power, but this doesn’t warrant building a dedicated facility. In that case, an existing, multi-tenant data center (MTDC) provides the ideal solution. For a network provider looking to deploy new 5G towers - or extend an undersea cable landing further inland - the edge may require far less space and power. It also might be in a unique location where no data center is available. Here, a modular, micro-data center is the perfect option.

The third element inherent to the edge is that it is performance/latency-specific. For instance, video and e-commerce applications may continue to be adequately served by data centers, cloud platforms, and CDNs in the tier one and two metros where they exist today. Those applications perform fine even with the 15-100 ms of latency that occurs between a user and a server hundreds of miles away. However, augmented reality or gaming applications that require sub-15 ms response times may need to reside far closer to end users. That could be in the form of a modular solution in one geography or a large MTDC in another.
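Put another way, the placement decision reduces to a latency-budget check. The following sketch uses the thresholds cited above; the tier names and cut-offs are illustrative assumptions rather than an industry standard:

```python
def choose_edge_tier(latency_budget_ms: float) -> str:
    """Map an application's round-trip latency budget to a class of edge deployment."""
    if latency_budget_ms < 15:
        # AR/VR, gaming, robotics: workloads must sit close to end users,
        # e.g. a modular micro data center beneath a cell tower.
        return "low-latency edge (modular/micro data center)"
    if latency_budget_ms <= 100:
        # Video and e-commerce: MTDCs, cloud platforms, and CDNs in tier 1/2
        # metros hundreds of miles away are typically adequate.
        return "metro edge (multi-tenant data center / CDN)"
    # Anything less latency-sensitive (batch analytics, backup, archival)
    # can stay in a centralized cloud region.
    return "centralized cloud region"

if __name__ == "__main__":
    for app, budget_ms in [("cloud gaming", 10), ("video streaming", 60), ("nightly backup", 500)]:
        print(f"{app}: {choose_edge_tier(budget_ms)}")
```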

A ubiquitous, multi-modal edge in three platforms

The most successful data center providers think of the edge as ubiquitous - everywhere and multi-modal - existing in different forms, depending on the business and application it’s supporting. This strategy is rooted in the idea of helping IT leaders ensure that their applications and data are deployed on the right platform, at the right edge, at the right time, and for the right audience. To support that, successful data center providers develop three specific edge platforms:

A hyperscale edge that provides a geographic footprint ideal for webscale cloud, content, and compute resource providers seeking to build or expand their availability zones in tiers one and two. This hyperscale edge reduces congestion within the metropolitan area, creating scalability and multi-node availability as edge infrastructure builds out for the next ten years.

An interconnect edge that gives network providers, SaaS developers, and enterprises seeking a hybrid IT strategy the open and neutral-access interconnections their applications demand. A portal enables software-programmable access and control of entire infrastructure ecosystems and the ability to interconnect with a wide set of platforms, managed through a single pane of glass.

A low-latency edge that can put workloads as close to end users as possible, whether beneath cell towers, at cable headends, or anywhere the application needs to be placed. The low-latency edge uses geography-specific infrastructure builds for cloud node extensions, content caching, video, gaming, IoT/robotics, and remote sensor applications.

The data center at the edge stands in the middle of that triumvirate. It's a place where applications, data, and infrastructure are on the right platform at the right time, targeting the right customers.

The edge today is a byzantine mix of enterprises and markets - with each organization grappling with questions about which edge solutions will take precedence, and which business models will prevail. The winners will find that place where applications, data, and infrastructure are on the right platform at the right time. For most enterprises, that time is now.

Author

Raul Martynek is a 20+ year veteran in the telecom and Internet Infrastructure sector. Martynek joined DataBank in 2017 as the Chief Executive Officer. In his role at DataBank, Martynek works to define the strategic direction of the company and its operations.
