Stats New Zealand chief data steward Liz MacPherson and Privacy Commissioner John Edwards have joined forces to help guide government thinking on the use of data analytics and algorithmic decision-making.
Backing public statements by Edwards questioning data programs at the Ministry of Social Development and elsewhere, the pair have jointly developed six key principles to support the safe and effective use of data analytics.
"Guidance, oversight, and transparency are essential to fostering trust, confidence, and integrity around the use of data the government holds on behalf of New Zealanders," the guidance states. "It’s important for Kiwis to understand how their personal data is used."
The six principles are intended to help agencies planning to use data analytics and algorithmic decision-making to deliver "stronger, more secure, and safer data use".
The single-page guide first asks agencies to ensure programs deliver clear public benefits, have been evaluated for fairness, and have a "solid grounding in law".
Agencies should also ensure data used is "fit for purpose".
"Decision-makers need to be aware of how data is collected and analysed, including the accuracy, precision, consistency, and completeness of data quality, and take special care when re-using data that was originally collected for another purpose," the guidance adds.
Thirdly, agencies should keep in mind that there are people behind the statistics, and protect them from misuse of data and breaches of privacy.
Transparency is also essential to support accountability and collaboration.
"This includes ensuring New Zealanders know what data is held about them; how it’s kept secure; who has access to it; and how it’s used," the guidance says.
"Consultation with stakeholders and Māori as partners ensures manaakitanga (data users show mutual respect), and kaitiakitanga (New Zealanders are mindful of their responsibilities and the communities they source data from), by making sure all data uses are managed in a highly trusted, inclusive, and protected way."
Data use and analytical processes should be well documented in line with relevant legislation and state sector guidelines.
"Explanations of decisions – and the analytical activities behind them – should be in clear, simple, easy-to-understand language."
The final two principles require agencies to understand the limitations of data and to maintain human oversight, especially over programs using algorithmic decision-making.
"Ensure significant decisions based on data involve human judgement and evaluation, and that automated decision-making processes are regularly reviewed to make sure they’re still fit for purpose," the guidance says.
"Decision-makers should approach analytical tools with an appropriate awareness of limitations of data quality and other sources of error."
Edwards has said the pressure to look to technology for answers to complex social problems is increasing, encouraged by consultancies, data scientists and software vendors. But government has broad responsibilities for how its data is used.
Edwards has indicated his admiration for aspects of the EU's new General Data Protection Regulation (GDPR), for instance, that mandate the right to human review of automated decisions.
Under those rules, the controller of data used to make the decision will have a duty to safeguard the rights, freedoms and interests of the subjects of those decisions, at least to the point of being able to contest them.
"These protections are significant in the context of international benchmark setting," Edwards said. "They are an influential signpost to future regulatory settings on automated decision making for greater transparency."
"I’m going to draw them to Parliament’s attention when we make our submission on the Privacy Bill and suggest that they might make a useful addition to the regulatory framework here, now that we have that rare opportunity."