Businesses Are Still Moving to the Cloud

Image Credit: peshkov/BigStockPhoto.com

Remember the cloud? It was all anyone could talk about a few years ago. Every product launch had to extol its cloud features, while new startups talked up all the exciting things they were doing in the cloud. The hype was so extreme that it inspired a backlash, with cynics claiming that 'the cloud is just someone else’s computer' - a reductive statement that failed to recognise the very real benefits of cloud technology.

The zeitgeist has moved on, and it’s now easy to think of the 'move to the cloud' as yesterday’s news. Today we can see the same pattern playing out with AI: some of its capabilities are overhyped, there is a backlash (some of which is misconceived), and the actual benefits are lost in a sea of misinformation and buzzwords.

But, in time, AI will become more like the cloud is today: businesses and providers will simply get on with making the most of what the technology has to offer.

Continuing growth

Whether they are moving more services and data or just starting the process, businesses are still migrating to the cloud. Gartner predicted late last year that end-user spending on public cloud services would grow by over 20% this year. While we tend to focus on the early adopters and innovators, there are many businesses (and people) who are cautious - so-called laggards - who simply want to see how things shake out before making any big decisions.

Plus, there is the ongoing effect of the pandemic. Many businesses will have either used cloud services for the first time or will have massively increased their use thanks to remote or hybrid working. We’ll likely see the effects of this for a long time to come as businesses ask how else the cloud can help beyond collaboration tools.

And then there are new applications, AI among them, that demand huge amounts of data and processing power - and the cloud is often the best place for them to live.

The kernel of truth in the 'someone else’s computer' meme is that the cloud is not some ethereal other realm, but exists in datacentres - and those datacentres will play a key role in supporting increasing data and cloud demands. But keeping up with increasing demand will mean integrating new technologies, and ensuring what already exists is working as well as it can.

One way of measuring how datacentres are responding to increasing needs is by their use of power. According to McKinsey, power consumption by datacentres will increase by around 10% a year, reaching 35GW by 2030, up from 17GW in 2022. This growth will come through a combination of planning and building new datacentres and upgrading those that already exist, with more servers, more cooling, and better connectivity. All of this will require ongoing testing and maintenance to ensure the best possible quality of service.
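As a quick sanity check on those figures (the arithmetic below is mine, not McKinsey's), compounding 17GW at roughly 10% a year over the eight years from 2022 to 2030 lands close to the projected number:

```python
# Rough, illustrative check of the reported growth figures.
base_gw = 17          # reported datacentre power consumption in 2022 (GW)
annual_growth = 0.10  # roughly 10% year-on-year growth
years = 2030 - 2022   # eight years of compounding

projected_gw = base_gw * (1 + annual_growth) ** years
print(f"Projected 2030 consumption: {projected_gw:.1f} GW")  # ~36.4 GW, in line with the ~35GW projection
```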

Upgrades, test and measurement

Photos of the banks of servers that make up a datacentre - static apart from the hum of fans and a few flashing lights - can make it seem as though it’s as simple as building a datacentre and 'letting it run'. This is far from the case. Following the initial build, ongoing infrastructure upgrades are integral to operations. The move to faster networking, including 400G, will massively increase data throughput, but 'plug and play' installation just isn’t possible: investment in fibres, connectors, connections, and more is needed. And with the networks connecting to the datacentre also being upgraded, similar upgrades are taking place inside, perhaps at an even faster rate.

More than before, technicians will need to maintain the datacentre as it evolves, and this will mean continuous testing of datacentre architectures containing new cabling and transceiver technologies. This includes fibre optic cable certification, connector inspection, and validation of transceiver optics and the Ethernet/Fibre Channel links connecting switches and servers. Strict service level agreements and the need for always-on data mean that technicians have to find faults on both live and dark systems - downtime to fix faults must be avoided at all costs. And, of course, there is a need to monitor links into and out of the datacentre, and between locations, which requires a different testing regime altogether.
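To make that kind of validation concrete, one routine check is comparing a transceiver's reported optical power and temperature against alarm thresholds. The sketch below is a minimal, hypothetical illustration: the field names, readings, and thresholds are invented for the example and do not correspond to any particular vendor's diagnostics interface or to EXFO's tooling.

```python
from dataclasses import dataclass

# Hypothetical DDM/DOM-style readings for a transceiver; in practice these
# would come from the switch or an optical test set, not hard-coded values.
@dataclass
class TransceiverReading:
    port: str
    rx_power_dbm: float   # received optical power
    tx_power_dbm: float   # transmitted optical power
    temperature_c: float

# Illustrative alarm thresholds (assumed for the example, not vendor specs).
RX_POWER_MIN_DBM = -10.0
TX_POWER_MIN_DBM = -8.0
TEMPERATURE_MAX_C = 70.0

def check_transceiver(r: TransceiverReading) -> list[str]:
    """Return human-readable faults found for one transceiver."""
    faults = []
    if r.rx_power_dbm < RX_POWER_MIN_DBM:
        faults.append(f"{r.port}: RX power {r.rx_power_dbm} dBm below {RX_POWER_MIN_DBM} dBm "
                      "(dirty connector or fibre fault?)")
    if r.tx_power_dbm < TX_POWER_MIN_DBM:
        faults.append(f"{r.port}: TX power {r.tx_power_dbm} dBm below {TX_POWER_MIN_DBM} dBm "
                      "(degrading laser?)")
    if r.temperature_c > TEMPERATURE_MAX_C:
        faults.append(f"{r.port}: temperature {r.temperature_c} C above {TEMPERATURE_MAX_C} C")
    return faults

if __name__ == "__main__":
    readings = [
        TransceiverReading("Ethernet1/1", rx_power_dbm=-3.2, tx_power_dbm=-1.5, temperature_c=41.0),
        TransceiverReading("Ethernet1/2", rx_power_dbm=-14.8, tx_power_dbm=-1.7, temperature_c=43.5),
    ]
    for reading in readings:
        for fault in check_transceiver(reading):
            print(fault)
```

In practice a check like this would run continuously against live links, so that degradation is caught and dealt with before a service level agreement is breached.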

Changing technology will also have an effect. Right now we’re seeing the shift to 400G as a standard, and in recent years we’ve seen a move to single mode fibre as it has become cheaper and better supported by standards. But multimode fibre may be making a comeback - datacentres that want to stay competitive need to keep one eye on that technology, and on whatever comes after 400G.

In much the same way that advances in optical technology are often ignored by the press in favour of wireless technologies, the hype around the cloud has dissipated somewhat. But just as optical technology moves on and datacentres need to keep on top of it, so businesses will continue to embrace cloud technology, making ever greater demands on what is far more than 'someone else’s computer'.

Author

Nicholas Cole is a Solution Manager at EXFO focused on enterprise and datacenters. He has spent the past 15 years working with telco and cloud providers designing products for testing fiber access and hyperscale networks. Nicholas holds a BA in business management and is a member of the British Standards Institution for fiber optics.
