'Black box' no more: This system can spot the bias in those algorithms

Why was your loan denied? A university's technique can shed some light

Between recent controversies over Facebook's Trending Topics feature and the U.S. legal system's "risk assessment" scores in dealing with criminal defendants, there's probably never been broader interest in the mysterious algorithms that are making decisions about our lives.

That mystery may not last much longer. Researchers from Carnegie Mellon University announced this week that they've developed a method to help uncover the biases that can be encoded in those decision-making tools.

Machine learning algorithms don't just drive the personal recommendations we see on Netflix or Amazon. Increasingly, they play a key role in decisions about credit, healthcare, and job opportunities, among other things.

So far, they've remained largely obscure, prompting increasingly vocal calls for what's known as algorithmic transparency, or the opening up of the rules driving that decision-making.

Some companies have begun to provide transparency reports in an attempt to shed some light on the matter. Such reports can be generated in response to a particular incident -- why an individual's loan application was rejected, for instance. They could also be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system is discriminatory.

But work on the computational foundations of such reports has been limited, according to Anupam Datta, CMU associate professor of computer science and electrical and computer engineering. "Our goal was to develop measures of the degree of influence of each factor considered by a system," Datta said.

CMU's Quantitative Input Influence (QII) measures can reveal the relative weight of each factor in an algorithm's final decision, Datta said, leading to much better transparency than has been previously possible. A paper describing the work was presented this week at the IEEE Symposium on Security and Privacy.

Here's an example of a situation where an algorithm's decision-making can be obscure: hiring for a job where the ability to lift heavy weights is an important factor. That factor is positively correlated with getting hired, but it's also positively correlated with gender. The question is, which factor -- gender or weight-lifting ability -- is the company using to make its hiring decisions? The answer matters for determining whether the company is engaging in discrimination.

To answer the question, CMU's system keeps weight-lifting ability fixed while allowing gender to vary, thus uncovering any gender-based biases in the decision-making. QII measures also quantify the joint influence of a set of inputs on an outcome -- age and income, for instance -- and the marginal influence of each.
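That intervention idea can be sketched in a few lines of Python. This is an illustrative toy, not CMU's implementation: the model, applicant data, and feature names are all invented, and influence is estimated simply as the fraction of decisions that flip when one input is replaced by a random draw from its observed distribution while the other is held fixed.

```python
import random

random.seed(0)

# Hypothetical hiring model standing in for a trained classifier.
# It secretly conditions on gender (a bias), not just lifting ability.
def hire(gender, lift_kg):
    return gender == "M" and lift_kg >= 40

# Invented applicant pool: (gender, weight-lifting ability in kg).
applicants = [(random.choice(["M", "F"]), random.uniform(20, 80))
              for _ in range(10_000)]

def influence(feature_index):
    """Fraction of decisions that flip when the chosen feature is
    replaced by a random draw from its marginal distribution -- an
    intervention in the spirit of the QII approach."""
    pool = [a[feature_index] for a in applicants]
    flips = 0
    for gender, lift in applicants:
        original = hire(gender, lift)
        sample = random.choice(pool)
        intervened = (hire(sample, lift) if feature_index == 0
                      else hire(gender, sample))
        flips += original != intervened
    return flips / len(applicants)

# Gender comes out more influential than lifting ability, exposing
# the bias even though both features correlate with the outcome.
print(f"influence of gender:  {influence(0):.2f}")
print(f"influence of lifting: {influence(1):.2f}")
```

Because the toy model gates on gender directly, randomizing gender flips more decisions than randomizing lifting ability does, which is exactly the signal an auditor would look for.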

"To get a sense of these influence measures, consider the U.S. presidential election," said Yair Zick, a post-doctoral researcher in CMU's computer science department. "California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power."
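The voting analogy points to cooperative game theory's power indices, the kind of aggregation the researchers describe. As a hedged illustration (the quota below is invented for the example; the weights echo recent electoral-vote counts), a brute-force Shapley-value computation shows how a player's power can differ from its raw weight:

```python
from itertools import permutations

# Toy weighted voting game: a measure passes when the supporting
# weights reach the quota (the quota of 70 is made up for this demo).
weights = {"CA": 55, "TX": 38, "PA": 20, "OH": 18}
QUOTA = 70

def wins(coalition):
    return sum(weights[p] for p in coalition) >= QUOTA

def shapley(player):
    """Average marginal contribution of `player` over all orderings --
    the aggregation idea behind power indices."""
    players = list(weights)
    perms = list(permutations(players))
    total = 0
    for order in perms:
        before = order[:order.index(player)]
        # 1 if adding the player turns a losing coalition into a winner.
        total += wins(before + (player,)) - wins(before)
    return total / len(perms)

for state in weights:
    print(state, round(shapley(state), 3))
```

In this game TX, PA, and OH end up with identical power despite very different weights, while CA holds half of it -- a concrete case of raw weight and actual influence coming apart, which is the distinction the aggregate QII measures are built to capture.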

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that QII provided better explanations than standard associative measures for a host of scenarios, including predictive policing and income estimation.

Next, they're hoping to collaborate with industrial partners so that they can employ QII at scale on operational machine-learning systems.

