Can justice be blind when it comes to machine learning? Researchers from The Alan Turing Institute, the Max Planck Institutes for Intelligent Systems and for Software Systems, the Universities of Cambridge and Warwick, and University College London present their findings at ICML 2018.
Stockholm – Concerns are rising that machine learning systems that make or support important decisions and judgments affecting individuals – e.g. by assessing how likely offenders are to reoffend, or by deciding which resumes should be filtered out during a job selection process – unfairly discriminate against certain groups. For instance, there is evidence that COMPAS, an algorithmic tool used in some US jurisdictions to predict reoffending, may be biased, or may be no more accurate or fair than predictions made by people with little or no criminal justice expertise.
A research publication titled “Blind Justice: Fairness with Encrypted Sensitive Attributes” by Niki Kilbertus, Adrià Gascón, Matt J. Kusner, Michael Veale, Krishna P. Gummadi and Adrian Weller addresses the issue. The scientists, from the Max Planck Institutes for Intelligent Systems and for Software Systems, The Alan Turing Institute, the Universities of Cambridge and Warwick, and University College London (UCL), presented the paper on Friday 13th July at the 35th International Conference on Machine Learning (ICML) in Stockholm. There, the researchers introduce a new take on machine learning fairness. The authors propose new methods to help regulators provide better oversight, practitioners develop fair and privacy-preserving data analyses, and users retain control over data they consider highly sensitive. By encrypting sensitive attributes, an outcome-based fair model can be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes in the clear.
Earlier work explored how to train machine learning models that are fair with respect to particular subgroups of the population, defined for example by gender or race (so-called ‘sensitive attributes’). To do so, these methods aim to avoid criteria such as disparate impact and disparate treatment. To avoid disparate treatment, sensitive attributes should not be considered at all. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined – e.g. in order to learn a fair model, or to check whether a given model is fair. The authors introduce methods from secure multi-party computation that resolve this tension: sensitive attributes are only ever examined in encrypted form, so both criteria can be respected without anyone seeing those attributes in the clear.
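To make the two criteria concrete, the sketch below checks a classifier's predictions for disparate impact using the common “four-fifths” demographic parity ratio, while disparate treatment is avoided simply by never giving the model the sensitive attribute as an input. This is a minimal illustration only; the data, column encoding and 0.8 threshold are assumptions for the example, and in the paper's setting the sensitive attribute would be handled only in encrypted form, never in the clear as it is here.

```python
import numpy as np

def disparate_impact_ratio(predictions, sensitive):
    """Ratio of positive-outcome rates between the two groups encoded
    in `sensitive` (0/1). Ratios well below ~0.8 are a common red flag."""
    rate_group_0 = predictions[sensitive == 0].mean()
    rate_group_1 = predictions[sensitive == 1].mean()
    return min(rate_group_0, rate_group_1) / max(rate_group_0, rate_group_1)

# Illustrative model outputs and a binary sensitive attribute.
# The model that produced `preds` never saw `sens` (no disparate treatment),
# but its outcomes can still differ across groups (possible disparate impact).
preds = np.array([1, 1, 1, 1, 0, 1, 0, 1, 0, 1])
sens  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(disparate_impact_ratio(preds, sens))  # about 0.75, below the 0.8 threshold
```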
“The field of fair learning has suffered from a dilemma”, lead author Niki Kilbertus explains. “To enforce fairness, sensitive attributes must be examined; yet in many situations, users may feel uncomfortable in revealing these data, or companies may be legally restricted in collecting and utilising them, especially with the advent of the General Data Protection Regulation. In this work we present a way to address this dilemma: by extending methods from secure multi-party computation, we enable a fair model to be learned or verified without users revealing their sensitive attributes.”
Turing Research Fellow Adrià Gascón adds: “Recent developments in cryptography, and more concretely secure multi-party computation, are opening exciting avenues for regulators, as they can now audit and oversee sensitive information in a way that was never before possible. And while issues of social fairness are complex, we have put forward this approach as one tool which may be useful to mitigate certain concerns around machine learning and society.”
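A toy example of the building block behind secure multi-party computation is additive secret sharing, sketched below: a user splits a sensitive value into random-looking shares held by different parties, so that no single party learns the value, yet simple aggregates (such as group counts used in a fairness check) can still be computed jointly. This is a conceptual illustration only, not the protocol used in the paper; the modulus and variable names are assumptions for the example.

```python
import secrets

P = 2**61 - 1  # a large prime modulus for the toy scheme

def share(value):
    """Split `value` into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (value - r) % P          # share for party A, share for party B

def reconstruct(share_a, share_b):
    """Recombine two shares to recover the original value."""
    return (share_a + share_b) % P

sensitive_attribute = 1                 # e.g. membership of a protected group
a, b = share(sensitive_attribute)
assert reconstruct(a, b) == sensitive_attribute   # neither share alone reveals it

# Addition works share-wise: each party sums its shares locally, and only the
# aggregate is ever revealed, never any individual's attribute.
a2, b2 = share(1)
a3, b3 = share(0)
group_count = reconstruct((a + a2 + a3) % P, (b + b2 + b3) % P)
print(group_count)  # 2
```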
The authors work in a multidisciplinary fashion to connect concerns in privacy, security, algorithmic fairness and accountability, helping to mitigate concerns around machine learning and society. This research is part of a larger body of work at The Alan Turing Institute addressing the challenge of making algorithmic systems fair, transparent and ethical. It is our aspiration to design and deliver ethical approaches to data science and artificial intelligence (AI) by bringing together cutting-edge technical skills with expertise in ethics, law, social science and policy. These themes relate to our Data Ethics Group and our Fairness, Transparency and Privacy group, and connect to our efforts to manage security in an insecure world.
Notes to Editors:
“Blind Justice: Fairness with Encrypted Sensitive Attributes” will be published in the Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden. The project idea was born at The Alan Turing Institute.
The authors are Niki Kilbertus (Max Planck Institute for Intelligent Systems and University of Cambridge), Adrià Gascón (The Alan Turing Institute and University of Warwick), Matt Kusner (The Alan Turing Institute and University of Warwick), Michael Veale (University College London), Krishna P. Gummadi (Max Planck Institute for Software Systems), Adrian Weller (The Alan Turing Institute and University of Cambridge).