
The Concept of the Cloud as a Wheel - Not an Arrow

Image Credit: marchmeena/BigStockPhoto.com

For years the canon of the cloud was simple: Put everything in the public cloud and stay there forever. This model made sense as businesses optimized for elasticity, developer agility, service availability and flexibility.

As those businesses reached scale, however, things changed. There are still many reasons to run your workloads on the public cloud, but there are just as many reasons to run them on the private cloud. 

The cloud operating model (a philosophy, not a location) is a wheel. It has a cycle. There are times to leverage the public cloud, times to leverage the private cloud, and times to leverage the colocation model. The cloud is not an arrow pointing into the public cloud until there is no data left behind. The cloud-industrial complex may have you believe otherwise - but only because that is in its interest, not the enterprise's.

This post outlines the contrarian view of the cloud - one that is focused on the enterprise’s needs first and foremost. 

Every boardroom is having this conversation

As noted, early cloud adopters valued flexibility, elasticity and developer agility. The public cloud promised all of these and essentially delivered on them all - driven in large part by the exceptional success of Amazon S3. As a result, enterprises piled into the cloud, cheered on by analysts and investors with the promise of greater efficiency and higher productivity.

And then the bills started ballooning.

Those same organizations realized that flexibility, agility and elasticity came at a fairly significant cost. As their workloads grew, their costs grew too - not linearly, but at a multiple of what they anticipated, fueled by egress and access charges.

As these workloads grew, they also matured and their characteristics became more predictable. Once organizations knew how performance requirements and demand patterns played out, the utility associated with flexibility and agility declined, while the utility of predictable costs increased.

Given that the data stack can comprise 20-40% of a company’s costs, the conversation about optimizing the cloud operating model is no longer one that occurs inside the CIO and/or CTO suite - it is now a CFO, CEO and board level discussion that permeates the entire organization.

Given the rising cost of capital and ubiquity of the cloud operating model, boardrooms are discussing optimization strategies - and what you are optimizing for determines what cloud(s) the organization runs on.

Understanding the principles of the cloud operating model

The cloud is not a physical location anymore. It was at one point, and it was the training ground for the developer-centric model that enterprises now embrace. That time, however, is gone. At this point, the tooling and skill sets that were once the domain of AWS, GCP and Azure are now available everywhere. Kubernetes is not confined to the public cloud distributions of EKS, GKE and AKS - there are dozens of distributions, and they are effectively indistinguishable. Grafana works on the public cloud, the private cloud and the edge.

While the cloud-industrial complex would have you believe the cloud is a place (the public cloud), your developers know better. They know the cloud is about engineering principles: containerization, orchestration, microservices, software-defined everything, RESTful APIs and automation.

Understanding these principles and understanding that they operate just as effectively outside of the public cloud creates true optionality and freedom. To be clear - there is no “one” answer here - but with the cloud operating model as the guide there is optionality. Optionality is good.
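As a rough illustration of that portability, consider the sketch below. It uses the S3 API (via Python's boto3) against both AWS S3 and a private, S3-compatible object store - only the endpoint changes, not the application code. The endpoint URL and bucket name are hypothetical placeholders; the example assumes credentials are already configured in the environment and that the bucket exists.

import boto3

# Minimal sketch: the same S3 API calls run against AWS S3 or a private,
# S3-compatible object store (for example MinIO) - only the endpoint changes.
# Endpoint URL and bucket name are placeholders; credentials are assumed to
# be configured in the environment and the bucket to already exist.

def make_client(endpoint_url=None):
    # endpoint_url=None uses AWS S3; otherwise point at a private or colo deployment
    return boto3.client("s3", endpoint_url=endpoint_url)

public_cloud = make_client()                                           # AWS S3
private_cloud = make_client("https://objects.internal.example:9000")   # on-prem endpoint (placeholder)

for s3 in (public_cloud, private_cloud):
    s3.put_object(Bucket="demo-bucket", Key="report.csv", Body=b"a,b,c\n")
    body = s3.get_object(Bucket="demo-bucket", Key="report.csv")["Body"].read()
    print(body)

The point is not the specific API but the principle: when the interfaces are standard and software-defined, the workload can live wherever the economics make sense.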

Understanding the lifecycle of the cloud

There is a certain kind of learned muscle memory associated with having a cloud native application. Those early adopters learned the principles of the cloud at a steady pace. Over time their workloads grew and their costs ballooned. The workloads and principles were no longer novel - but the cost of supporting those workloads at scale was.

For enterprises it has become clear that the value associated with agility, elasticity and flexibility is now outweighed by the cost of remaining in the public cloud.

This is the lifecycle of the cloud. You extract the agility, elasticity, and flexibility value, then you turn your attention to economics and operational acuity.

Can you live in a hotel forever?

Think of the difference between the private cloud and the public cloud as the difference between a hotel stay and an apartment lease or mortgage payment.

Hotels are nice places to visit, but you wouldn’t want to live there. In a hotel, you can change rooms, get maid service, and when something breaks, you simply make a phone call and someone else promptly arrives to fix it. The tradeoff, of course, is cost. Those benefits of living indefinitely in a hotel are going to start wearing off as soon as you get the bill.

Conversely, your apartment lease or mortgage comes with a predictable cost. Yes, you may use more energy or water from one month to the next, but the overall bill is fairly consistent - and a fraction of the hotel's.
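To put rough numbers behind the analogy, here is a purely illustrative back-of-the-envelope comparison in Python. Every figure is a placeholder assumption, not a quote from any provider; the point is only that usage-based charges (especially egress) and amortized capacity behave very differently as data grows.

# Illustrative only - every number below is a placeholder assumption,
# not a published price. Substitute your own quotes and amortization terms.

def public_cloud_monthly(tb_stored, tb_egressed,
                         storage_per_tb=23.0,   # $/TB-month stored (placeholder)
                         egress_per_tb=90.0):   # $/TB moved out (placeholder)
    return tb_stored * storage_per_tb + tb_egressed * egress_per_tb

def private_cloud_monthly(tb_stored,
                          capex_per_tb=150.0,    # hardware cost $/TB (placeholder)
                          amortization_months=48,
                          ops_per_tb_month=2.0): # power, space, staff (placeholder)
    return tb_stored * (capex_per_tb / amortization_months + ops_per_tb_month)

for tb in (100, 1_000, 10_000):
    pub = public_cloud_monthly(tb, tb_egressed=tb * 0.2)  # assume 20% of data egressed each month
    priv = private_cloud_monthly(tb)
    print(f"{tb:>6} TB  public: ${pub:,.0f}/mo   private: ${priv:,.0f}/mo")

The crossover point will differ for every organization, which is exactly why this is an ongoing, board-level optimization question rather than a one-time migration decision.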

Finding balance across all of your needs

This is not a post about repatriation - it is about optimization. What you are optimizing for should help you determine where you should run your workload. We believe the best approach is to stay vendor agnostic by choosing technology that exists across cloud marketplaces like AWS, Azure, GCP and IBM and gives you access to a variety of Kubernetes distributions like EKS, GKE, AKS, OpenShift, Tanzu and Rafay. That is the definition of multi-cloud.

We talk about balancing needs and optimizing for workloads. Again, some workloads are born in the public cloud. Some workloads grow out of it. Others are just better on the private cloud. It will depend.

What matters is that when your organization is committed to the principles of the cloud operating model, you have the flexibility to decide - and with that comes leverage. And who doesn't like a little leverage, especially in today's economy?

Author

Ugur Tigli is CTO at MinIO. In his current role, he oversees enterprise strategy and interfaces with MinIO’s enterprise client base. He helps clients architect and deploy API-driven, cloud native and scalable enterprise-grade data infrastructure using MinIO. Ugur has almost two decades of experience building high performance data infrastructure for global financial institutions. Prior to MinIO, he was a technology leader at Bank of America, where he served as the Senior Vice President, Global Head of Hardware Engineering. He joined Bank of America through the acquisition of Merrill Lynch where he was the Vice President for Storage Engineering.
