Reseller News

Shared source won’t fix the AWS problem

Open source companies think the right licence will protect them from being crushed by the big cloud providers. It won’t

I have a great deal of respect for my colleague Matt Asay, who works for Amazon Web Services (AWS) and writes week after week about the advantages and virtues of open source. However, this is not to say that I agree with him.

In fact, I disagree with him on a great many things, including his most recent column suggesting that “shared source” or licence tricks might solve the competitive problem created by AWS specifically and cloud computing generally.

I do not just disagree with him. I think that, like his beloved Arsenal, he missed the ball.

Open source motivation

For developers, open source is about access and collaboration. I can start coding without creating a vendor relationship—especially since I may discover a better solution halfway through. Essentially, I do not have to get married to go on a first date. To get my application written, I may need a feature that is missing. I may need a bug fixed. In the worst-case scenario, I can fix it myself. I am also partly immunised from the machinations of vendor alliances and break-ups.

With shared code and a shared knowledge base, I can work with others. I can even work with people who do not work at the same company as I do, or even on the same type of application. We help each other by making the code better, making the documentation better, and asking and answering questions.

Vendor motivations are different. The corollary to access is adoption. From an economic standpoint, making the software free to use, inexpensive to adopt, and free to modify does just about everything a company can to satisfy market demand. This is why software companies embrace open source licences.

Open source is also a force for commoditisation and standardisation. Many years ago web servers were big money. Now they are embedded everywhere (largely based on open source) and no longer a moneymaker for the industry. Web server software has become a low-level commodity. Companies often release things in open source to cause a standardisation effect. You can find this motivation behind Google’s Chrome and Kubernetes.

Open source cuts both ways

Failure is the default in business. If you build it, probably no one will come. Open source helps companies create greater adoption and a larger market share. However, it essentially sets the price to zero. As Asay pointed out, capturing the value of that market share is a challenge.

In the past, vendors answered this with an “open core” or freemium model. Some part of the software was free, and some part was not. This was usually something like giving you a Honda but selling the tires for $20k or $30k, and if you do not like it, well, build your own tire factory.

The problem with open core is that it breaks open source’s collaborative motivation. To run the fully supported “Enterprise” edition (read: proprietary), one must forgo the benefits of open source. If you fix something or add a feature to the open source version, you either have to forgo the benefit of the supported version or wait for the vendor to decide to merge your code and produce an official release. Open core effectively means no more collaboration.

In all successful open source projects, the “free riders” who use but never contribute to the project outnumber the contributors by several orders of magnitude. In open core projects, however, external contributions rarely rise far above zero. When they happen, they are usually the result of a vendor partnership agreement (e.g., SAP contributing an SAP integration).

Now we are moving to a utility computing model where companies create “software as a service.” Assuming there is an open source version, even running it may require setting up your own parallel AWS knockoff. One can expect the external contributor pattern to match open core, possibly in a purer form (read: absolute zero).

For smaller companies seeking to use open source for adoption, the benefit maxes out at some point. Basically, Elastic and MongoDB are not gaining new developers just because their software is open source.

Elastic might lose some customers to competitors by not being open source, but presumably someone has calculated the loss and decided that better value capture is worth the negative PR. As Elastic and MongoDB have pointed out, they do not have external contributors anyhow.

Amazon’s fork is not important

Amazon’s fork of Elasticsearch is predictable. However, Amazon’s motivations are likely PR and the cost savings that come from collaboration. Amazon could produce compatibility layers for MongoDB and Elastic even if they were not open source. In fact, Amazon has done this, as when it created Babelfish to run SQL Server applications on Amazon Aurora.

It is unlikely that Amazon’s Elasticsearch fork will attract regular contributors outside of perhaps Microsoft and Google or some Amazon partners. There is no more incentive to contribute to Amazon’s project than there is to contribute to the original. No matter what, Amazon will continue to provide alternative versions and compatibility layers to whatever software achieves a high level of market adoption, whether that software is open source or not.

From a business perspective, it is unlikely that a company that would otherwise pay Elastic will run the Amazon fork of Elasticsearch just to avoid doing so. So the fork is simply irrelevant.

Who can compete with AWS?

So really, open source is just a red herring here. The availability of the source code is merely a speed bump to Amazon putting up a compatible alternative to MongoDB Atlas or Elastic Cloud. Any of these companies, if asked, will talk about how Amazon takes their code and contributes nothing (so-called “strip-mining”). However, open source is irrelevant to the core problem.

The question becomes, can startups and smaller technology companies achieve sufficient industry adoption and compete with AWS?

Can you provide a better cloud service on AWS than AWS can? In the short term, that is certainly possible. Tell your investors this and they will call it an “execution play,” which is actually a derogatory term. You can sum it up as “my plan to win the race is to run faster than my opponent.”

However, your opponent has a head start, more training, better doctors, the latest and greatest steroidal medical enhancements, and almost infinite cash reserves to invest. AWS cannot change directions as fast as you can, but most of the running is in a circle or a straight line.

Investors want to hear how you will “differentiate” your product in the market. So far, most vendors have latched onto the obvious differentiator: the one thing AWS will not do is multi-cloud. Whenever I find myself around AWS employees I just say multi-cloud as many times as possible, because I am in my heart an Internet troll.

For most companies, multi-cloud is actually just cloud portability. Few organisations actually run multi-cloud applications as a regular course of business. As a differentiator, multi-cloud is weak. Only the biggest companies care about it. It lets them negotiate with their cloud providers. It lets them deal with international deployment. It lets them handle multi-region outages (though most do not bother or they would go down less often).

Besides multi-cloud? Innovation, perhaps. This is not to say that true technological advancement cannot happen, but it does not look like another indexing technology or database with incremental improvements. It would look like something that negates the need for either.

It would be something that meets a pressing need—that unlocks a new possibility or dramatic increases in efficiency. You could consider “serverless” to be such a technological advancement. Developers just want to code. They really do not want to think about deployment or operations.

Open source isn’t the only route to adoption

In the cloud, one might ask, is open source the only route to mass adoption? Is open source the best route to mass adoption? The first answer is definitively no. Services from AWS Lambda to most of Microsoft Azure’s APIs are well documented, have tutorials, and have user communities, yet you cannot fork the implementation on GitHub.

What is the answer to the second question? It depends. It is too early to say whether a product like Fauna, a serverless database, can achieve mass adoption with a unique database technology provided strictly as a service.

Most companies still deploy on bare VMs as opposed to higher-level cloud services. So open source is still a very viable route to adoption for many technologies. Change takes time. But if we reach the tipping point where most applications are “serverless” and built from a set of “as a service” offerings, will open source infrastructure (the software beyond low-level libraries and toolkits) bring any real value?

So shared source—who cares?

It is not the licence that is stopping people from contributing to MongoDB or Elastic. It is a lack of motivation. Why should I? There is nothing in it for me.

It is not the open source licence that allows AWS to give MongoDB or Elastic a haircut. It is market power, money, and the move to utility computing.

How does this play out? How can software vendors compete? Either they all move to compete at the application layer, and we assume the big three clouds will eat the entire IT infrastructure industry, or someone will invent a new business model (very rare) or new technology (less rare) for which there are no immediate alternatives (maybe at the edge). Or maybe software companies fly below the radar (profitable but too small for AWS to notice) or they just truly run faster (execution).

However, this is a business problem—open source is just a red herring. We do not need more licences. Regardless, no change in licensing will affect either the profitability of MongoDB or Elastic or the volume of external contributions to their software.