
What it Takes to Build an Interconnected Edge and Beyond


The levels of speed and accessibility we now thrive on in our IT landscape mean that we no longer operate in a world where mere access to a digital service is enough. Instead, as end users become increasingly dependent on high-bandwidth, low-latency applications - and as businesses depend more on data-driven strategies to evolve - quality of service has become king.

Whether it’s mobile banking in finance, distance learning capabilities in education, hybrid cloud frameworks for enterprises or content delivery for major names like Netflix or Amazon, next-gen capabilities have become the norm - not the exception. This means that competitive advantages are no longer derived from the ability to keep up with digital transformation and IT demand; they’re derived from the ability to maintain consistent, reliable and highly gratifying experiences across the entire spectrum of use cases.

IT capabilities must be robust enough, fast enough and seamless enough to endure in an always-on world of high consumer expectation. To achieve this, organizations must look to one crucial element of their architecture - networking and connectivity.

We’ve all heard it repeated ad nauseam: The network is at the core of today’s IT. But this is for a good reason. A capable network that reaches from the edge (read: the user themselves) to the core and delivers exceptional speed and mobility is paramount. But what does establishing a truly capable network look like in practice?

What’s at the core of connectivity?

There’s a reason that interconnection has been at the forefront of data discussions for some time now. The number of endpoints organizations must connect to and enable data to flow through is growing - between end users, business locations, clouds, data centers and beyond, all these locations need to be captured in a new type of connectivity fabric. Here’s what this means: Reach and diversity must be prioritized from the edge, to the cloud, to the data center and back again.

Diversity as an imperative (as a result of user and business demand for unfailingly available and reliable IT) is changing the landscape of network consumption. What was once one path between geographically distant nodes is now becoming two or three. Route diversity within and between any given point of presence is no longer optional - if unexpected downtime occurs, businesses lose value through decreased customer trust and loss of data insight. This is why carrier-neutral facilities have become a mainstay of data center selection.

It’s also within this connectivity and interconnection discussion that we see why edge data centers have continued to carve out a well-established place for themselves. In fact, this need for an edge-centric topology underscores the global edge data center market’s expected growth to $13.5 billion in 2024 - double the market size when compared to 2020. The edge is where connectivity is, in many ways, perfected. Bringing data closer to the user is crucial for maintaining low latency and high speeds, making it a great solution for emerging 5G use cases, IoT applications and beyond. But is this even enough now?

As larger amounts of data continue to travel over greater distances, this only heightens the need to reconcile speed and geographic reach. Edge data centers, despite being paramount for digital success, may not always solve the dilemma of physical distance that exists beyond the markets they reside in. So, what’s an enterprise to do?

Getting to the edge itself is only one half of this battle - the other half is how businesses cultivate their connectivity at the edge to truly gain advantages and get the most out of a single presence. This raises the question: Can the right data center solve for connectivity both with immediate proximity to nearby users and with interconnection to a greater network topology? Yes, but only with the right ecosystem.

Peering and the rise of the data ecosystem

Peering access is now one of the most important assets a data center can offer, and peering itself is one of the most advantageous actions an organization can take in the pursuit of robust connectivity. Internet exchange points (IXPs or IXs) are the key to maintaining speed and efficiency while still enabling data to reach many geographically diverse and distant points.

In essence, peering is when networks directly connect to exchange traffic. This averts the need for third-party intermediaries as traffic transporters, which can add transport costs onto connectivity frameworks. At scale, these cost savings can become substantial. More importantly though, performance is notably improved through peering. When handing traffic off to upstream transit providers, control over what path the data takes to its destination is lost. In extreme cases, traffic may be sent to an intermediate destination in a completely different direction than its ultimate landing place - a process known as tromboning. Every additional hop and mile along the route adds latency, and added latency means a suffering customer experience or business strategy.
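To make the latency cost of tromboning concrete, here is a back-of-the-envelope sketch. It assumes only propagation delay, with light in fiber traveling at roughly 200,000 km/s (about two-thirds of c); the route distances are illustrative, and real-world latency also includes queuing, routing and equipment delays.

```python
# Rough propagation-only latency estimate for a fiber route.
# Assumption: light in fiber travels at ~200,000 km/s (~2/3 of c).
FIBER_SPEED_KM_PER_MS = 200.0  # 200,000 km/s expressed in km per millisecond

def round_trip_ms(route_km: float) -> float:
    """Propagation-only round-trip time over a fiber path of route_km."""
    return 2 * route_km / FIBER_SPEED_KM_PER_MS

# Illustrative comparison: peering locally vs. traffic hauled ("tromboned")
# to a distant transit hub and back.
direct = round_trip_ms(100)      # ~100 km path via a local IX
tromboned = round_trip_ms(1500)  # ~1,500 km detour through a remote hub

print(f"direct:    {direct:.1f} ms RTT")     # 1.0 ms
print(f"tromboned: {tromboned:.1f} ms RTT")  # 15.0 ms
```

Even before queuing delays are counted, the detour costs an order of magnitude more round-trip time - which is precisely the penalty local peering avoids.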

The upshot is this: Colocating in an edge data center is a huge step in the right direction when it comes to meeting new interconnection needs, but colocation within a data center that delivers IX access is the solution everyone should be after. Data centers can no longer just be judged on their location alone - although this does play a huge part. Instead, their connectivity ecosystems are, and will continue to be, what sets them apart. As such, organizations looking to get the best of high speeds, low latencies and greater geographic access should be choosing accordingly.

Author

Todd Cushing is a nationally recognized Data Center and Telecom Executive with 25+ years of experience. Todd’s primary roles include the design and oversight of the technology infrastructure of 1623 Farnam data center, establishing and maintaining relationships with telecom carriers and providing technology direction and support for customers. Prior to joining 1623 Farnam, Todd was a Tenant Representative Broker for CBRE Data Center Real Estate and Olsson Engineering focusing on site selection and design for data centers worldwide.
