'Black box' no more: This system can spot the bias in those algorithms

Why was your loan denied? A university's technique can shed some light

Between recent controversies over Facebook's Trending Topics feature and the "risk assessment" scores the U.S. legal system applies to criminal defendants, there has probably never been broader interest in the mysterious algorithms making decisions about our lives.

That mystery may not last much longer. Researchers from Carnegie Mellon University announced this week that they've developed a method to help uncover the biases that can be encoded in those decision-making tools.

Machine learning algorithms don't just drive the personal recommendations we see on Netflix or Amazon. Increasingly, they play a key role in decisions about credit, healthcare, and job opportunities, among other things.

So far, they've remained largely obscure, prompting increasingly vocal calls for what's known as algorithmic transparency, or the opening up of the rules driving that decision-making.

Some companies have begun to provide transparency reports in an attempt to shed some light on the matter. Such reports can be generated in response to a particular incident -- why an individual's loan application was rejected, for instance. They could also be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system is discriminatory.

But work on the computational foundations of such reports has been limited, according to Anupam Datta, CMU associate professor of computer science and electrical and computer engineering. "Our goal was to develop measures of the degree of influence of each factor considered by a system," Datta said.

CMU's Quantitative Input Influence (QII) measures can reveal the relative weight of each factor in an algorithm's final decision, Datta said, leading to much better transparency than has been previously possible. A paper describing the work was presented this week at the IEEE Symposium on Security and Privacy.

Here's an example of a situation where an algorithm's decision-making can be obscure: hiring for a job where the ability to lift heavy weights is an important factor. That factor is positively correlated with getting hired, but it's also positively correlated with gender. The question is, which factor -- gender or weight-lifting ability -- is the company using to make its hiring decisions? The answer has substantial implications for determining whether the company is engaging in discrimination.

To answer the question, CMU's system keeps weight-lifting ability fixed while allowing gender to vary, thus uncovering any gender-based biases in the decision-making. QII measures also quantify the joint influence of a set of inputs on an outcome -- age and income, for instance -- and the marginal influence of each.
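
To make the idea concrete, here's a minimal sketch of what such an intervention could look like in code. It assumes a trained classifier `model` with a scikit-learn-style `predict` method and a NumPy array `X` of applicant records; those names, and the simple decision-flip measure, are illustrative assumptions rather than details taken from the CMU paper.

```python
import numpy as np

def intervention_influence(model, X, feature, n_samples=100, rng=None):
    """Estimate one input's influence on a classifier's decisions.

    QII-style intervention sketch: hold every other input fixed and
    replace the chosen column with values drawn from its own marginal
    distribution (here, by shuffling it). The shuffle breaks that
    column's correlation with the rest of the record -- gender vs.
    weight-lifting ability, say -- so any change in the output is
    attributable to that input alone.
    """
    rng = rng or np.random.default_rng(0)
    baseline = model.predict(X)        # decisions on the real data
    X_int = X.copy()
    flip_rate = 0.0
    for _ in range(n_samples):
        # Intervene on one column only; everything else stays fixed.
        X_int[:, feature] = rng.permutation(X[:, feature])
        flip_rate += np.mean(model.predict(X_int) != baseline)
    return flip_rate / n_samples       # avg. chance a decision changes
```

Applied to the hiring example, a large score for the gender column alongside a small score for weight-lifting ability would be evidence that the system is keying on gender -- exactly the bias the technique is meant to surface.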

"To get a sense of these influence measures, consider the U.S. presidential election," said Yair Zick, a post-doctoral researcher in CMU's computer science department. "California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power."

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that QII provided better explanations than standard associative measures for a host of scenarios, including predictive policing and income estimation.

Next, they're hoping to collaborate with industrial partners so that they can employ QII at scale on operational machine-learning systems.

