Pay-per-use hardware models such as HPE GreenLake and Dell Apex are designed to deliver cloud-like pricing structures and flexible capacity to on-premises data centers. And interest is growing as enterprises look for alternatives to buying equipment outright for workloads that aren’t a fit for public-cloud environments.
The concept of pay-per-use hardware has been around for more than a decade, but the buzz around it is growing, said Daniel Bowers, a former senior research director at Gartner. “There’s been a resurgence of interest in this for about four years, driven a lot by HPE and its GreenLake program.”
HPE has pledged to transform its entire portfolio to pay-per-use and as-a-service offerings by 2022. Additional programs include Apex from Dell, which earlier this year unveiled the first products in its portfolio of managed storage, servers, and hyperconverged infrastructure; Cisco Plus network-as-a-service (NaaS) from Cisco, which plans to deliver the majority of its portfolio as a service over time; Lenovo TruScale Infrastructure Services; and NetApp’s Keystone Flex Subscription storage-as-a-service offering.
On the adoption front, uptake so far is strongest in storage. Gartner predicts that in 2024, half of newly deployed storage capacity will be consumed as a service. On the server side, 5.6 per cent of on-premises x86 server spending will be consumed as a service in 2024.
But the model isn’t without its challenges. Here are five factors to consider.
The hardware might cost more
It’s a misconception that consumption-based models let companies acquire the same hardware at a lower lifetime cost than an outright purchase, Bowers said. “Too many people think of this as just a way to get cheaper hardware, that somehow they can game the system. And so they’re excited. But it’s not that. So don’t waste your time.”
In reality, a pay-per-use model is generally more expensive than buying the gear outright, particularly if an enterprise knows how much capacity it needs. If an enterprise knows it needs 100 servers for the next three years, for example, it would be less expensive to buy those servers outright.
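A simplified back-of-the-envelope comparison illustrates the point. All figures here are hypothetical assumptions for illustration, not vendor pricing:

```python
# Illustrative comparison of outright purchase vs. pay-per-use for a
# known, steady workload. All prices are made-up assumptions.

servers = 100
months = 36

purchase_price = 8_000   # assumed one-time cost per server
pay_per_use_rate = 300   # assumed monthly per-server rate

outright = servers * purchase_price                 # one-time spend
consumption = servers * pay_per_use_rate * months   # total over the term

print(f"Outright purchase: ${outright:,}")     # $800,000
print(f"Pay-per-use:       ${consumption:,}")  # $1,080,000
```

With fully predictable demand, the consumption model costs more over the term; its value shows up only when actual usage varies.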
Flexible capacity means operational agility
The appeal of consumption-based pricing is in aligning infrastructure costs with usage. The programs are designed to allow enterprises to easily scale resources up and down, and they shift the risk of overprovisioning to the vendor. The value comes from gaining operational agility.
A consumption-based model also can significantly streamline the procurement cycle. “Vendors give you what they call buffer stock. You’ve got extra equipment sitting there unused—dark equipment ready to go. When you need something, you just turn it on,” Bowers said. “So instead of waiting a week, two weeks, three months to order new equipment and bring it in, it’s already sitting there in your facility ready to go. You just power it on.”
Gotchas: Term commitments and minimum payments
Most programs today aren’t strictly pay-per-use, Bowers said. They combine fixed payments with some variable elements based on a measurement of utilisation. Actual pay-per-use would imply that a customer would owe nothing if no resources were used in a particular month.
“These programs almost always involve a long-term commitment, like three or four years. They always involve some minimum payment level, which is substantial,” he said. “It’s not like you can scale this stuff down to zero and pay nothing one month.”
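That structure, a committed minimum plus metered usage above it, can be sketched as follows; the commitment and rate numbers are illustrative assumptions only:

```python
def monthly_bill(used_units, committed_units, unit_rate):
    """Billable usage never drops below the committed minimum."""
    billable = max(used_units, committed_units)
    return billable * unit_rate

# Hypothetical contract: 80 TB minimum commitment at $20 per TB-month.
print(monthly_bill(used_units=120, committed_units=80, unit_rate=20))  # 2400
print(monthly_bill(used_units=0,   committed_units=80, unit_rate=20))  # 1600
```

Even in a month with zero usage, the bill stays at the committed floor, which is why these programs are "consumption-based" rather than strictly pay-per-use.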
Pricing: Financial smarts required
When enterprise IT teams get a quote for consumption-based infrastructure, many will find themselves in unfamiliar territory, having never evaluated this kind of pricing scheme.
“It’s easy for HP or Dell to come in and say how much they’re going to charge you per core, but then you realise you have no idea whether that price is fair. That’s not how you calculate things in your own facilities, and it’s apples to oranges versus public cloud costs,” Bowers said. “As soon as enterprises are given a quote, they tend to go into spreadsheet hell for three months, trying to figure out whether that quote is fair. So it can take three, four, five months to negotiate a first deal.”
Enterprises struggle to evaluate consumption-based proposals, and they lack confidence in their usage forecasts, Bowers said. “It takes a lot of financial acumen to adopt one of these programs.”
Experience can help. “The companies that make the most confident decisions are those that did a lot of leasing in the past. Not because this is a lease, but because those companies have the mental muscles to be able to evaluate the financial aspects of time, value, variable payments, and risks of payment spreads,” Bowers said.
The big difference between a program like HPE GreenLake or Dell Apex and leasing is that under a consumption-based model, costs fluctuate month to month depending on how much you’re using. Leasing is strictly a finance program, and capacity doesn’t change in a typical leasing program.
Experience in the public cloud, which also involves variable expenses, can help, too. “Simply going from an annual budget to, ‘I don’t know how much I’m going to spend’ is a big shift. So having some cloud usage helps,” Bowers said.
At the same time, consumption-based pricing requires some of the same cost governance that public cloud requires so that an enterprise doesn’t get blindsided by unchecked spending. “You have to put cost controls on it to make sure that you don’t just turn on the faucets and let people spew resources wildly—exactly the controls you put on public-cloud usage,” Bowers said.
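The guardrail itself is the same one teams already use for cloud spend: a budget threshold that trips an alert before the bill gets away from you. A minimal sketch, assuming a hypothetical metered-spend feed:

```python
def check_budget(month_to_date_spend, monthly_budget, alert_threshold=0.8):
    """Return an alert level once spend crosses a fraction of budget."""
    ratio = month_to_date_spend / monthly_budget
    if ratio >= 1.0:
        return "over-budget"
    if ratio >= alert_threshold:
        return "warning"
    return "ok"

print(check_budget(9_500, 10_000))   # warning
print(check_budget(11_000, 10_000))  # over-budget
```

In practice this check would run against whatever usage metering the vendor’s portal exposes, gated per project or department.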
In general, enterprise companies that are well suited to evaluating a consumption-based model for infrastructure are those with large, centralised IT groups that are accustomed to doing project or department chargebacks for IT services. “Enterprises of that size and scale have done it well. Internally they kind of act like a mini service provider themselves,” Bowers said, “so they’re familiar with trying to align their costs.”
The model for data centers is young
Workloads that are being considered for a consumption-based model are typically workloads that already exist on-premises and can’t be moved to the public cloud for latency or data sovereignty reasons. That doesn’t mean it’s a small market, however. “There’s tons of stuff that can’t go in the cloud, so this isn’t just a tiny sliver of the world,” Bowers said.
The dramatic 30 per cent to 40 per cent growth in public cloud spending, compared to a flat market for new storage and servers, can make it seem like spending for on-premises infrastructure is declining. But it’s not, Bowers said. “A misperception that a lot of end users have is that cloud is going up and on-premises is going down. Actually, people are buying just as much stuff on-premises—servers and storage—now as they used to.”
In the market for consumption-based infrastructure, the area that so far is gaining the most traction is storage. One reason is that storage is easier to price and understand. “Under these programs, a vendor is essentially giving you a vending machine that spits out terabytes of storage. You press the button. You understand what you’re getting,” Bowers said. “It’s easy to price, it’s easy to understand, it’s easy to adopt.”
Server metrics are more challenging. Vendors might charge per node, per core, per GB of memory, or per virtual machine, for example.
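The same fleet can produce very different bills depending on which metric the contract uses, which is part of what makes server quotes hard to compare. The rates below are illustrative assumptions, not vendor figures:

```python
# One hypothetical fleet, priced under three different server metrics.
nodes = 20
cores_per_node = 32
vms_per_node = 15

rates = {                # assumed monthly rates, for illustration only
    "per_node": 450,
    "per_core": 18,
    "per_vm":   35,
}

bills = {
    "per_node": nodes * rates["per_node"],
    "per_core": nodes * cores_per_node * rates["per_core"],
    "per_vm":   nodes * vms_per_node * rates["per_vm"],
}

for metric, bill in bills.items():
    print(f"{metric}: ${bill:,}/month")
```

Comparing quotes therefore means normalising them against your own expected core counts and VM density, not just the headline rate.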
Most nascent is the market for consumption-based networking gear in the data center.
“The problem with consumption-based pricing for data-center networking is people haven’t figured out what metric to charge for networking,” Bowers said. “Do we charge by the megabyte transferred, or do we charge by the number of ports we deploy? Or the number of switches we deploy?”
As the players work through how to charge for network-as-a-service options, the broader market for consumption-based pricing models continues to grow at a healthy clip. Adoption rates are running at about 30 per cent growth, year over year.