In 2017, The Economist declared that data, rather than oil, had become the world's most valuable resource. The refrain has been repeated ever since. Organisations in every industry have invested heavily in data and analytics, and continue to do so. But like oil, data and analytics have their dark side.
According to IDG's State of the CIO 2020 report, 37 per cent of IT leaders say that data analytics will drive the most IT investment at their organisation this year. Insights gained from analytics and actions driven by machine learning algorithms can give organisations a competitive advantage, but mistakes can be costly in terms of reputation, revenue, or even lives.
Understanding what your data is telling you is important, but so is understanding your tools, knowing your data, and keeping your organisation's values firmly in mind.
Here are a handful of high-profile analytics and AI blunders from the past decade to illustrate what can go wrong.
1 - UK lost thousands of Covid-19 cases by exceeding spreadsheet data limit
In October 2020, Public Health England (PHE), the UK government body responsible for tallying new Covid-19 infections, revealed that nearly 16,000 coronavirus cases went unreported between 25 September and 2 October. The culprit? Data limitations in Microsoft Excel.
PHE uses an automated process to transfer Covid-19 positive lab results as a CSV file into Excel templates used by reporting dashboards and for contact tracing. Unfortunately, Excel spreadsheets can have a maximum of 1,048,576 rows and 16,384 columns per worksheet.
Moreover, PHE was listing cases in columns rather than rows. When the case count exceeded the 16,384-column limit, Excel simply left off the remaining 15,841 records.
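The failure mode is easy to reproduce in a few lines. The sketch below is not PHE's actual pipeline; the loader function and the batch size are illustrative, with the count chosen to match the 15,841 lost records.

```python
# Sketch of the PHE failure mode: one case per worksheet COLUMN,
# so Excel's per-worksheet column cap becomes the hard limit on cases.
XLSX_MAX_COLS = 16_384  # Excel's column limit per worksheet

def load_into_template(case_ids):
    """Hypothetical loader that silently drops anything past the cap."""
    return case_ids[:XLSX_MAX_COLS]

cases = [f"case-{i}" for i in range(32_225)]  # illustrative batch size
loaded = load_into_template(cases)
print(len(cases) - len(loaded))  # 15841 records silently lost
```

Because the slice succeeds without raising an error, nothing in the pipeline signals that data was discarded, which is why the loss went unnoticed for a week.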
The "glitch" didn't prevent individuals who got tested from receiving their results, but it did stymie contact tracing efforts, making it harder for the UK National Health Service (NHS) to identify and notify individuals who were in close contact with infected patients.
In a statement on 4 October, Michael Brodie, interim chief executive of PHE, said NHS Test and Trace and PHE resolved the issue quickly and transferred all outstanding cases immediately into the NHS Test and Trace contact tracing system.
PHE put in place a "rapid mitigation" that splits large files and has conducted a full end-to-end review of all systems to prevent similar incidents in the future.
2 - Healthcare algorithm failed to flag black patients
In 2019, a study published in Science revealed that a healthcare prediction algorithm, used by hospitals and insurance companies throughout the US to identify patients in need of "high-risk care management" programs, was far less likely to single out black patients.
High-risk care management programs provide trained nursing staff and primary-care monitoring to chronically ill patients in an effort to prevent serious complications. But the algorithm was much more likely to recommend white patients for these programs than black patients.
The study found that the algorithm used healthcare spending as a proxy for determining an individual's healthcare need. But according to Scientific American, the healthcare costs of sicker black patients were on par with those of healthier white patients, which meant black patients received lower risk scores even when their need was greater.
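A toy example shows how a spending proxy can invert a ranking. The patients and numbers below are invented for illustration; the study did not publish the actual model.

```python
# Illustrative only: two patients with identical health need, but one
# (like the sicker black patients in the study) generates lower costs
# because of reduced access to care.
patients = [
    {"id": "A", "true_need": 8, "annual_cost": 12_000},
    {"id": "B", "true_need": 8, "annual_cost": 20_000},
]

def risk_score(patient):
    # Proxy model: past spending stands in for health need.
    return patient["annual_cost"]

ranked = sorted(patients, key=risk_score, reverse=True)
print([p["id"] for p in ranked])  # ['B', 'A']: equal need, unequal scores
```

The proxy looks reasonable in aggregate, which is exactly what makes this class of bias hard to spot: the model is accurately predicting cost while quietly mis-predicting need.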
The study's researchers suggested that a few factors may have contributed. First, people of colour are more likely to have lower incomes, which, even when insured, may make them less likely to access medical care. Implicit bias may also cause people of colour to receive lower-quality care.
While the study did not name the algorithm or the developer, the researchers told Scientific American they were working with the developer to address the situation.
3 - Dataset trained Microsoft chatbot to spew racist tweets
In March 2016, Microsoft learned that using Twitter interactions as training data for machine learning algorithms can have dismaying results.
Microsoft released Tay, an AI chatbot, on the social media platform. The company described it as an experiment in "conversational understanding."
The idea was that the chatbot would assume the persona of a teen girl and interact with individuals on Twitter using a combination of machine learning and natural language processing. Microsoft seeded it with anonymised public data and some material pre-written by comedians, then set it loose to learn and evolve from its interactions on the social network.
Within 16 hours, the chatbot posted more than 95,000 tweets, and those tweets rapidly turned overtly racist, misogynist, and anti-Semitic. Microsoft quickly suspended the service for adjustments and ultimately pulled the plug.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Peter Lee, corporate vice president, Microsoft Research & Incubations (then corporate vice president of Microsoft Healthcare), wrote in a post on Microsoft's official blog following the incident.
Lee noted that Tay's predecessor, Xiaoice, released by Microsoft in China in 2014, had successfully had conversations with more than 40 million people in the two years prior to Tay's release.
What Microsoft didn't take into account was that a group of Twitter users would immediately begin tweeting racist and misogynist comments to Tay. The bot quickly learned from that material and incorporated it into its own tweets.
"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Lee wrote.
4 - Amazon AI-enabled recruitment tool only recommended men
Like many large companies, Amazon is hungry for tools that can help its HR function screen applications for the best candidates. In 2014, Amazon started working on AI-powered recruiting software to do just that. There was only one problem: The system vastly preferred male candidates. In 2018, Reuters broke the news that Amazon had scrapped the project.
Amazon's system gave candidates star ratings from 1 to 5. But the machine learning models at the heart of the system were trained on 10 years' worth of resumes submitted to Amazon — most of them from men.
As a result of that training data, the system started penalising resumes that included the word "women's" (as in "women's chess club captain") and even downgraded candidates from all-women's colleges.
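The dynamic is reproducible with even a naive bag-of-words scorer; the toy resumes and the smoothing scheme below are invented for illustration and have nothing to do with Amazon's actual system.

```python
from collections import Counter

# When historical outcomes skew male, any token concentrated in the
# rejected pile inherits a penalty, regardless of job relevance.
hired    = ["chess club captain", "software lead", "chess club member"]
rejected = ["women's chess club captain", "women's college software lead"]

def token_weights(pos_docs, neg_docs):
    pos = Counter(" ".join(pos_docs).split())
    neg = Counter(" ".join(neg_docs).split())
    # Add-one smoothing; weight > 1 means "associated with being hired"
    return {t: (pos[t] + 1) / (neg[t] + 1) for t in set(pos) | set(neg)}

weights = token_weights(hired, rejected)
print(weights["women's"] < 1.0)  # True: the token itself is penalised
```

Note that the word "women's" carries no information about candidate quality; the model punishes it purely because of who was hired in the past, which is why editing individual features could never guarantee neutrality.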
At the time, Amazon said the tool was never used by Amazon recruiters to evaluate candidates.
The company tried to edit the tool to make it neutral, but ultimately decided it could not guarantee it would not learn some other discriminatory way of sorting candidates and ended the project.
5 - Target analytics violated privacy
In 2012, an analytics project by retail titan Target showcased how much companies can learn about customers from their data. According to the New York Times, in 2002 Target's marketing department started wondering how it could determine whether customers were pregnant.
That line of inquiry led to a predictive analytics project that would famously lead the retailer to inadvertently reveal to a teenage girl's family that she was pregnant. That, in turn, would lead to all manner of articles and marketing blogs citing the incident as part of advice for avoiding the "creepy factor."
Target's marketing department wanted to identify pregnant individuals because there are certain periods in life — pregnancy foremost among them — when people are most likely to radically change their buying habits.
If Target could reach out to customers in that period, it could, for instance, cultivate new behaviours in those customers, getting them to turn to Target for groceries or clothing or other goods.
Like all other big retailers, Target had been collecting data on its customers via shopper codes, credit cards, surveys, and more. It mashed that data up with demographic data and third-party data it purchased.
Crunching all that data enabled Target's analytics team to determine that there were about 25 products sold by Target that could be analysed together to generate a "pregnancy prediction" score. The marketing department could then target high-scoring customers with coupons and marketing messages.
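Mechanically, such a score can be as simple as a weighted sum over purchases of the signal products. The products and point values below are hypothetical; Target never published its model.

```python
# Hypothetical point values for a few of the ~25 signal products.
WEIGHTS = {
    "unscented lotion": 30,
    "zinc supplement": 25,
    "magnesium supplement": 25,
    "large tote bag": 10,
}

def pregnancy_score(basket):
    """Sum the points of any signal products in a shopper's basket."""
    return sum(WEIGHTS.get(item, 0) for item in basket)

basket = ["unscented lotion", "zinc supplement", "magnesium supplement"]
print(pregnancy_score(basket))  # 80 (out of a nominal 100)
```

Each individual purchase is innocuous on its own; it is the combination that becomes revealing, which is what made the resulting marketing feel invasive to customers.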
Additional research would reveal that studying customers' reproductive status could feel creepy to some of those customers.
According to the Times, the company didn't back away from its targeted marketing, but did start mixing in ads for things they knew pregnant women wouldn't buy — including ads for lawn mowers next to ads for diapers — to make the ad mix feel random to the customer.