Why AI investments fail to deliver

The success or failure of AI initiatives has more to do with people than with technology.

The COE, led by a chief analytics officer, is best positioned to handle responsibilities like developing education and training programs, creating AI process libraries (data science methodology), producing the data catalog, building maturity models, and evaluating project performance. The COE essentially handles duties that benefit from economies of scale. Those duties also include nurturing AI talent, negotiating with third-party data providers, setting governance and technology standards, and fostering internal AI communities.

The COE’s representatives in the various business units are better positioned to deliver training, promote adoption, help identify the decisions that AI can augment, maintain the implementations, run incentive programs, and generally decide where, when, and how to introduce AI initiatives to the business. Business unit reps could be augmented on a project basis by a “SWAT team” from the COE.

Not embedding intelligence in business processes

One of the most common stumbling blocks in deriving value from AI initiatives is incorporating data insights into existing business processes. This “last mile” challenge is also one of the easiest to solve, using a business rules management system (BRMS). The BRMS is mature technology, having been installed in large numbers since the early 2000s, and it has gained a new lease on life as a vehicle for deploying predictive models. The BRMS provides an ideal decision point in an automated business process, one that is both manageable and reliable. If your business is not using a BPM (business process management) system to automate (and streamline and rationalise) core business processes, then stop right here. You don’t need AI yet; you need the basics first, namely BPM and BRMS.
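
To make the “decision point” idea concrete, here is a minimal sketch, written in plain Python rather than a real BRMS rule language, of how a predictive model’s score might feed a small set of auditable rules inside an automated process. The loan-approval scenario, field names, and thresholds are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch of a business rule acting as the decision point in a process.
# The thresholds, field names, and the risk_score produced by a predictive
# model are hypothetical; a real BRMS would externalise these rules so the
# business can change them without redeploying code.
from dataclasses import dataclass


@dataclass
class LoanApplication:
    applicant_id: str
    amount: float
    risk_score: float  # model output, 0.0 (safe) to 1.0 (risky)


def decide(application: LoanApplication) -> str:
    """Apply simple, auditable rules to the model's score."""
    if application.risk_score < 0.2 and application.amount <= 50_000:
        return "approve"   # low risk, small amount: straight-through processing
    if application.risk_score < 0.6:
        return "refer"     # medium risk: route to a human underwriter
    return "decline"       # high risk: reject automatically


if __name__ == "__main__":
    print(decide(LoanApplication("A-1001", 25_000, 0.12)))  # approve
    print(decide(LoanApplication("A-1002", 80_000, 0.45)))  # refer
```

The point of keeping the rules this simple and separate from the model is that the business can inspect, test, and change the policy without touching the model itself.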

Most modern business rules management systems include model management and cloud-based deployment options. In a cloud scenario, citizen data scientists could create models using tools like Azure Machine Learning Studio and the InRule BRMS, with the models deployed directly to business processes via REST endpoints. A cloud-based combination such as this allows for easy experimentation with the decision-making process at a far more reasonable cost than a full-blown AI program.
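
As a rough illustration of that combination, the sketch below shows how one step in a business process might call a model’s REST scoring endpoint and then hand the prediction to a rules service for the final decision. The endpoint URLs, payload shapes, and bearer-token authentication are placeholders for illustration; they are not the actual Azure Machine Learning Studio or InRule APIs.

```python
# Hedged sketch: call a model's REST scoring endpoint, then pass the prediction
# to a rules endpoint that returns the business decision. All URLs, payload
# fields, and credentials below are hypothetical placeholders.
import requests

SCORING_URL = "https://example-ml.azurewebsites.net/score"   # hypothetical model endpoint
RULES_URL = "https://example-rules.internal/decisions/loan"  # hypothetical BRMS endpoint
API_KEY = "replace-with-your-key"


def score_and_decide(application: dict) -> dict:
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Ask the deployed model for a prediction.
    score_resp = requests.post(SCORING_URL, json=application, headers=headers, timeout=10)
    score_resp.raise_for_status()
    prediction = score_resp.json()  # e.g. {"risk_score": 0.37}

    # 2. Pass the prediction (plus the original facts) to the rules service,
    #    which applies the business policy and returns the decision.
    rules_payload = {**application, **prediction}
    decision_resp = requests.post(RULES_URL, json=rules_payload, headers=headers, timeout=10)
    decision_resp.raise_for_status()
    return decision_resp.json()  # e.g. {"decision": "refer", "reason": "medium risk"}
```

Because both calls are plain REST, the process can swap in a retrained model or a revised rule set without changing the orchestration code.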

Failure to experiment

Now we get to the other side of the coin. How do you use AI to create new business models, disrupt markets, create new products, innovate, and boldly go where no one has gone before? Venture-backed start-ups have a failure rate of about 75 per cent, and they are at the bleeding edge of AI business models. If your new AI-based product or business initiatives have a lower failure rate, then you are beating some of the best investors out there.

Even the most elite technology experts fail, and often. Eric Schmidt, former CEO of Google, disclosed some of the company’s methods in his 2011 Senate testimony:

To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm. Most of these changes are imperceptible to users and affect a very small percentage of websites, but each one of them is implemented only if we believe the change will benefit our users.

That works out to a roughly 96 per cent failure rate for proposed changes: only 516 of the 13,311 changes evaluated made it into the algorithm, a success rate of under 4 per cent.

The key take-away here is that failure will occur. Inevitably. The difference between Google and most other companies is that Google’s data-driven culture allows it to learn from its mistakes. Notice as well the key word in Schmidt’s testimony: experiments. Experimentation is how Google, along with Apple, Netflix, Amazon, and other leading technology companies, has managed to benefit from AI at scale.

A company’s ability to create and refine its processes, products, customer experiences, and business models is directly related to its ability to experiment.
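
To ground what “experiment” means in practice, the sketch below evaluates a single hypothetical side-by-side test of the kind Schmidt describes: two variants shown to users, and a simple statistical check on whether the observed difference in click-through rate is likely to be real. The traffic numbers are made up for illustration, and a real experimentation platform would add guardrails such as pre-registered metrics and minimum sample sizes.

```python
# Minimal sketch of evaluating one side-by-side experiment: did variant B's
# click-through rate beat variant A's by more than chance? Implemented as a
# two-proportion z-test using only the standard library; the sample numbers
# are illustrative.
from math import sqrt, erf


def two_proportion_p_value(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert the z statistic to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


if __name__ == "__main__":
    p = two_proportion_p_value(clicks_a=480, n_a=10_000, clicks_b=540, n_b=10_000)
    print(f"p-value: {p:.3f}")  # ship variant B only if the evidence is strong enough
```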

What next?

Just as the industrial revolution swept away companies that failed to adopt machine manufacturing over hand-crafted products, the AI and machine learning sea change will wipe out companies that fail to adapt to the new environment. Although it’s tempting to think the challenges of AI are primarily technical, and to blame failures on technology, the reality is that most failures of AI projects are failures in strategy and in execution.

In many ways, this is good news for companies. The “old-fashioned” business challenges behind the failures of AI projects are well understood. While you can’t avoid the necessary changes in culture, organisational structure, and business processes, some comfort can be taken in knowing that the routes have been charted; the challenge is in steering the ship and avoiding the rocks. Starting with small, simple experiments in applying AI to existing processes will help you gain valuable experience before embarking on longer AI journeys.

