'Black box' no more: This system can spot the bias in those algorithms

Why was your loan denied? A university's technique can shed some light

Between recent controversies over Facebook's Trending Topics feature and the U.S. legal system's use of "risk assessment" scores for criminal defendants, there's probably never been broader interest in the mysterious algorithms making decisions about our lives.

That mystery may not last much longer. Researchers from Carnegie Mellon University announced this week that they've developed a method to help uncover the biases that can be encoded in those decision-making tools.

Machine learning algorithms don't just drive the personal recommendations we see on Netflix or Amazon. Increasingly, they play a key role in decisions about credit, healthcare, and job opportunities, among other things.

So far, they've remained largely opaque, prompting increasingly vocal calls for what's known as algorithmic transparency, or the opening up of the rules driving that decision-making.

Some companies have begun to provide transparency reports in an attempt to shed some light on the matter. Such reports can be generated in response to a particular incident -- why an individual's loan application was rejected, for instance. They could also be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system is discriminatory.

But work on the computational foundations of such reports has been limited, according to Anupam Datta, CMU associate professor of computer science and electrical and computer engineering. "Our goal was to develop measures of the degree of influence of each factor considered by a system," Datta said.

CMU's Quantitative Input Influence (QII) measures can reveal the relative weight of each factor in an algorithm's final decision, Datta said, leading to much better transparency than has been previously possible. A paper describing the work was presented this week at the IEEE Symposium on Security and Privacy.

Here's an example of a situation where an algorithm's decision-making can be opaque: hiring for a job where the ability to lift heavy weights is an important factor. That factor is positively correlated with getting hired, but it's also positively correlated with gender. The question is, which factor -- gender or weight-lifting ability -- is the company using to make its hiring decisions? The answer has substantive implications for determining whether the company is engaging in discrimination.

To answer the question, CMU's system keeps weight-lifting ability fixed while allowing gender to vary, thus uncovering any gender-based biases in the decision-making. QII measures also quantify the joint influence of a set of inputs on an outcome -- age and income, for instance -- and the marginal influence of each.
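To make that intervention concrete, here is a minimal Python sketch of the fix-one-input-and-vary-another idea, using invented data and a hypothetical decision rule rather than the researchers' actual system. Because the toy rule looks only at lifting ability, resampling gender should leave its decisions unchanged, while resampling lifting ability should flip many of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical hiring data: lifting ability is correlated with gender.
gender = rng.integers(0, 2, size=n)                    # 0 or 1
lifting = 50.0 + 15.0 * gender + rng.normal(0, 10, n)  # kg

def decide(gender, lifting):
    # Stand-in for a trained model. This toy rule ignores gender,
    # so the intervention below should report zero gender influence.
    return lifting > 62.0

def influence(decide, gender, lifting, resample):
    """Estimate a feature's influence as the probability that the
    decision changes when that feature alone is resampled from its
    marginal distribution while the other inputs stay fixed."""
    baseline = decide(gender, lifting)
    idx = rng.permutation(len(gender))  # resample by random permutation
    if resample == "gender":
        intervened = decide(gender[idx], lifting)
    else:
        intervened = decide(gender, lifting[idx])
    return np.mean(baseline != intervened)

print("gender influence :", influence(decide, gender, lifting, "gender"))   # 0.0
print("lifting influence:", influence(decide, gender, lifting, "lifting"))  # clearly > 0
```

If the decision rule instead used gender as a shortcut for lifting ability, the first number would jump above zero even though the two inputs are correlated, which is exactly the bias the measure is designed to expose.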

"To get a sense of these influence measures, consider the U.S. presidential election," said Yair Zick, a post-doctoral researcher in CMU's computer science department. "California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power."

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that QII provided better explanations than standard associative measures for a host of scenarios, including predictive policing and income estimation.

Next, they're hoping to collaborate with industrial partners so that they can employ QII at scale on operational machine-learning systems.

