If only those cloud vendors would stop innovating, we might eventually settle on “lowest common denominator” commodity services that could make “multi-cloud” more of a reality.
I’ve been making this point for years: while vendors peddle the ability to run workloads across multiple clouds, the reality is that each cloud provider offers native services that simply aren’t available on rival clouds. You can wish that weren’t true, but it’s true all the same.
And getting more true all the time.
Even the most cursory review of the work that Microsoft, Google, Amazon, and Alibaba are doing suggests there really is no such thing as commodity services. But does this mean multi-cloud is completely, utterly dead? No, as MongoDB and others seem determined to demonstrate.
There’s no such thing as commodity in the cloud
But first, the dream! Workloads that magically work across different clouds! Analyst Corey Quinn, however, is skeptical:
[T]he idea of building workloads that can seamlessly run across any cloud provider or your own data centres with equal ease… is compelling and something I would very much enjoy. However, it’s about as practical as saying “just write bug-free code” to your developers—or actually trying to find the spherical cow your physics models dictate should exist. It’s a lot harder than it looks.
Software (and cloud) simply doesn’t work that way. Thanks to the efforts of the different cloud vendors, the universe of “lowest common denominator” keeps shrinking, as cloud services like compute and storage gain in innovative differentiation rather than dwindling into a muddle of sameness.
For example, what could be more commodity than storage? Sure, if you say the words “object storage,” “block storage,” etc., then you can find the same thing in pretty much any cloud. But look deeper at, say, how Google Cloud builds storage, and suddenly things don’t look quite so “same.” Google Cloud has built its archive tiers in software policy rather than separate hardware, which means its Coldline storage class has the same access latency as top-tier storage.
Or what about compute? Total commodity, right? Well, not in the Amazon Web Services (AWS) world, which has been building a new class of processor, the Graviton, built on 64-bit Arm Neoverse cores in a custom system-on-chip designed by AWS.
The result is dramatically faster floating-point performance per core for scientific and high-performance workloads, lower costs, and more. (Disclosure: I work for AWS but the views expressed herein are 100% my own.)
Nor is AWS alone. Businessweek recently pointed out what’s happening:
As Amazon, Google, and Microsoft compete for cloud computing customers, the specific virtues of their chips may become a selling point, says SmugMug’s MacAskill. “It’s going to get pretty interesting when these cloud providers begin to differentiate themselves even further.”
This is, of course, already happening, and it means greater innovation for customers, even if it also may mean the multi-cloud mythology resonates a bit less. Not because cloud providers are trying to lock in customers and give them less, but precisely because they’re trying to give customers more.
Which is not to say that multi-cloud is a scam.
Microservicing your way to multi-cloud
MongoDB, for example, launched multi-cloud clusters in late 2020. What does that mean? According to the company, it means that “customers can distribute their data in a single cluster across multiple public clouds simultaneously, or move workloads seamlessly between them.”
To benefit from this, an application is composed of separate microservices, each deployed to a different cloud to take advantage of the best services that cloud offers.
This isn’t moving an application across clouds seamlessly. It’s not about lowest common denominator services that are commodified across clouds. It’s actually the opposite. It’s the act of invoking the best of each cloud, identified and reached through discrete microservices. In fact, a sizeable percentage of MongoDB Atlas multi-cloud clusters are running across all three of the major cloud providers.
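The pattern can be sketched in a few lines. Everything here — the service names, endpoints, and routing helper — is hypothetical, invented only to show the idea of one application invoking discrete microservices that live on different clouds:

```python
# Hypothetical sketch: one application composed of microservices, each
# deployed on the cloud whose native services suit it best. All names
# and endpoints below are invented for illustration.

SERVICES = {
    "analytics": "https://analytics.example.gcp",   # lives near its data warehouse
    "inference": "https://inference.example.aws",   # runs on an Arm-based fleet
    "identity":  "https://identity.example.azure",  # tied to a directory service
}

def call_service(name: str, payload: dict) -> dict:
    """Route a request to whichever cloud hosts this microservice."""
    endpoint = SERVICES[name]
    # A real app would make an HTTP call here; the point is the routing
    # decision, not the transport.
    return {"service": name, "endpoint": endpoint, "payload": payload}

result = call_service("analytics", {"query": "daily_active_users"})
print(result["endpoint"])  # the analytics work lands on its home cloud
```

The application is “multi-cloud” not because any one workload floats between providers, but because each piece sits where it runs best.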
This sounds great, and it is, to a point. That point begins (and ends) with microservices. Depending on your application, microservices might be amazing. How? Well, suddenly different teams can more easily collaborate on the same system without necessarily using the same tools (programming languages, runtimes, etc.).
Microservices also make it easier to build highly scalable applications, because instead of having to scale one monolithic application, teams can break up these applications into smaller services, and scale them independently. Great, right?
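As a toy illustration of that independent scaling (all service names, traffic numbers, and capacities here are made up):

```python
import math

# Toy model of independent scaling: each microservice keeps its own
# replica count, so a traffic spike in one service does not force the
# whole application to scale. Names and numbers are illustrative.

replicas = {"catalog": 2, "checkout": 2, "search": 2}

def scale(service: str, requests_per_sec: int, per_replica_capacity: int = 100) -> int:
    """Resize one service for its own load, leaving the others untouched."""
    replicas[service] = max(1, math.ceil(requests_per_sec / per_replica_capacity))
    return replicas[service]

scale("search", 950)  # a spike in search traffic: search alone grows to 10
print(replicas)       # catalog and checkout stay at 2
```

A monolith facing that same search spike would have to scale the entire application, checkout and all.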
Well, yes. Also no. They might be exactly the wrong thing (yes, sometimes monoliths are the way to go).
According to Temporal’s Ryland Goldstein, “The first issue people noticed when they switched to microservices was that they had suddenly become responsible for a lot of different types of servers and databases.” Martin Fowler goes one step further, arguing there are at least three great reasons for picking a monolith over microservices:
- A distributed, microservices-oriented architecture is harder to program, since remote calls are slow and are always at risk of failure.
- Maintaining strong consistency is difficult for a distributed system.
- Most companies lack a mature operations team capable of managing many services, most of which will be redeployed regularly.
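The first of those points translates directly into extra code: every cross-service request needs timeouts, retries, and a failure path that an in-process call would never require. A minimal sketch, with an invented failure simulation standing in for the network:

```python
import random

# Sketch of the overhead a remote call imposes compared with an
# in-process call: retries and a fallback path. The failure
# simulation below is invented for illustration.

def flaky_remote_call(fail_rate: float) -> str:
    """Stand-in for a network call that sometimes times out."""
    if random.random() < fail_rate:
        raise TimeoutError("remote service did not respond")
    return "ok"

def call_with_retries(fail_rate: float, attempts: int = 3) -> str:
    for _ in range(attempts):
        try:
            return flaky_remote_call(fail_rate)
        except TimeoutError:
            continue  # a monolith's in-process call never needs this branch
    return "fallback"  # degrade gracefully rather than crash

print(call_with_retries(fail_rate=0.0))  # healthy network: "ok"
print(call_with_retries(fail_rate=1.0))  # total outage: "fallback"
```

Multiply that boilerplate by every pair of services that talk to each other, and Fowler's caution starts to feel concrete.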
All of which may mean MongoDB’s proffered multi-cloud approach has promise, but it may well involve more work than many companies want to take on. As with most things in IT, the answer to multi-cloud is “it depends.” It depends on your organisation’s maturity, and on how much you want the added complexity of picking best of breed versus centralising your investments.
With multi-cloud, in short, your mileage may vary.