Algorithm Discrimination and PROTECT
By Katharina Miller*
In their recent publication “Algorithmic discrimination in Europe. Challenges and opportunities for gender equality and non-discrimination law”, Raphaële Xenidis and Professor Janneke Gerards from Utrecht University propose a specific approach to developing solutions to algorithmic discrimination and to creating opportunities for improving equality through technology. Their Prevent, Redress, Open, Train, Explain, Control, Test (“PROTECT”) approach is an integrated framework that aims to bring together the different possible tools, instruments, solutions and good practices for addressing algorithmic discrimination. The steps are as follows:
Prevent: diverse and well-trained IT teams, equality impact assessments, ex ante “equality by design” or “legality by design” strategies.
Redress: combining different legal tools in non-discrimination law, data protection law etc. to foster clear attribution of legal responsibilities, clear remedies, fair rules of evidence, flexible and responsive interpretation and application of non-discrimination concepts.
Open: fostering transparency, e.g. through open data requirements for monitoring purposes (e.g. access to source codes).
Train: educating, creating, and disseminating knowledge on non-discrimination and equality issues among IT specialists; raising awareness about issues of algorithmic discrimination with regulators, judges, recruiters, officials, society at large.
Explain: explainability, accountability and information requirements.
Control: active human involvement (human-centred artificial intelligence), e.g. in the form of human-in-the-loop (HITL) systems designed to avoid rubber-stamping, complemented by supervision and consultation mechanisms (chain of control and consultation with users).
Test: continuous monitoring of algorithms and their output, setting up auditing, labelling and certification mechanisms.
According to Gerards and Xenidis, the prevention of algorithmic discrimination can be achieved by integrating various legal, knowledge-based, and technological measures. These include diversifying the professional communities that design and train algorithms, and deploying “equality by design” strategies that offer computer and data scientists guidance on the equality law framework. Equality and gender impact assessments, which aim to mainstream equality in algorithmic design, are also introduced. According to the authors, such prevention strategies can only be effective if two important prerequisites are met.
First, it is crucial to train people and disseminate knowledge about inequality challenges across society in general. IT professionals should be educated in gender equality and non-discrimination law in the same way that medical professionals receive ethics training. Conversely, equality law professionals (practitioners, civil servants, judges, regulators, equality bodies, etc.), as well as citizens and public and private users of artificial intelligence (AI) tools, should be informed of the discriminatory risks linked to the use of AI and of existing debiasing strategies.
Second, IT professionals should pay close attention to the transparency and explainability of algorithms. The same goes for the availability of open and clean data: IT professionals and all stakeholders involved in creating AI tools should aim to work only with open and clean data. Both are key to the training and control components of these prevention strategies.
According to Gerards and Xenidis, constant monitoring of AI tools is important to curb algorithmic discrimination. Testing mechanisms should be put in place to audit algorithms, particularly high-impact ones. Another option they propose is certification strategies, through which tech companies guarantee that the algorithms they design and sell are not discriminatory. To my knowledge, no such certification strategy yet exists. The commitment of tech companies is especially crucial to monitoring strategies, which will have to improve the transparency, accountability and explainability of algorithms. The authors argue that, “in line with the second dimension of the black box metaphor, the new horizons opened up by algorithmic technologies should be turned into opportunities to better detect and correct discrimination.”
Gerards and Xenidis think that human control plays a vital role in this integrated approach to algorithmic discrimination. When creating AI tools, there should be public collective supervision as well as individual human supervision, combined with a clear allocation of liability and legal responsibility, to foster active human control over decisions relying on algorithmic recommendations or predictions. The authors hope that such elements will discourage rubber-stamping and offset automation biases.
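A minimal sketch of what such human-in-the-loop control might look like in practice: every algorithmic recommendation passes through a human reviewer, and a near-zero override rate is flagged as possible rubber-stamping. The function names, the reviewer interface and the alarm threshold are my own illustrative assumptions, not a mechanism the authors specify.

```python
def decide(recommendations, reviewer, alarm_threshold=0.02):
    """Route each algorithmic recommendation through a human reviewer
    and flag a near-zero override rate as possible rubber-stamping."""
    decisions = [reviewer(rec) for rec in recommendations]
    overrides = sum(d != r for d, r in zip(decisions, recommendations))
    override_rate = overrides / len(recommendations)
    rubber_stamp_suspected = override_rate < alarm_threshold
    return decisions, override_rate, rubber_stamp_suspected

# A reviewer who waves every recommendation through looks suspicious:
_, rate, suspected = decide([True, False] * 50, reviewer=lambda rec: rec)
print(rate, suspected)  # prints: 0.0 True
```

The point of the override-rate signal is exactly the one the authors make: a human who never disagrees with the machine is not exercising control, so supervision mechanisms need to monitor the reviewers as well as the algorithm.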
Finally, Gerards and Xenidis address a very important aspect of EU equality law and algorithmic discrimination: they advocate making legal redress available in the areas where it is currently lacking. As I discussed in an earlier Leading with AI article, algorithmic discrimination magnifies existing weaknesses of EU equality law, such as its limited handling of intersectionality. Addressing algorithmic discrimination will mean reconsidering the gaps in the material scope of EU gender equality and non-discrimination law.
In the final statement of their publication, the authors write, “Adapting and revisiting some of the core concepts of the EU equality doctrine will also be necessary in order to accommodate the changing nature of discrimination. Legal redress will have to be transversal and integrate gender equality and non-discrimination law with other legal areas, not least data protection law.”
I agree with the authors’ statement. As an example, some European Union member states prohibit the use of sex-disaggregated data about employees, citing supposed data protection issues. However, without sex-disaggregated employee data, it is nearly impossible to address the gender pay gap, which AI tools will then only entrench (as described in my article on “Algorithm and bias in employment”). To address these challenges successfully and to ensure effective redress against algorithmic discrimination, it will be crucial that all relevant institutions cooperate proactively.
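To make the point about sex-disaggregated data concrete, here is a minimal sketch (with invented salary figures) of how the unadjusted gender pay gap is computed. Note that the calculation depends entirely on the “sex” field: strike that column, and the gap simply cannot be measured.

```python
from statistics import mean

# Hypothetical employee records; the "sex" field is exactly the
# disaggregation that some data-protection readings would prohibit.
employees = [
    {"sex": "f", "salary": 42_000},
    {"sex": "f", "salary": 45_000},
    {"sex": "f", "salary": 48_000},
    {"sex": "m", "salary": 50_000},
    {"sex": "m", "salary": 55_000},
    {"sex": "m", "salary": 60_000},
]

def unadjusted_pay_gap(records):
    """Unadjusted gap: (mean male pay - mean female pay) / mean male pay."""
    m = mean(r["salary"] for r in records if r["sex"] == "m")
    f = mean(r["salary"] for r in records if r["sex"] == "f")
    return (m - f) / m

gap = unadjusted_pay_gap(employees)
print(f"unadjusted gender pay gap: {gap:.1%}")  # prints: unadjusted gender pay gap: 18.2%
```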
As a contributor to the “Leading with AI” newsletter, I will try to do my bit by informing myself, other equality law professionals, and citizens (not only EU citizens, but citizens worldwide) of the discriminatory risks linked to the use of AI and of existing debiasing strategies.
*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.