
Algorithmic Discrimination - Equality in the Digital Era: AI and Anti-discrimination Law in Europe
By Katharina Miller*
In February, the Leading with AI team organised its first event, “Equality in the digital era: AI and anti-discrimination law in Europe”. Our speaker was Raphaële Xenidis, a lecturer in EU law at the Edinburgh University School of Law and a Marie Curie Fellow at iCourts, University of Copenhagen. The event focused on algorithmic discrimination, with a particular emphasis on the European approach to regulating it.
This article gives an overview of topics addressed during the event.
An algorithm is a set of computer instructions for solving a problem: it produces an output value from input data. It can be based on explicit rules (if condition A is met, outcome B should follow) or on machine learning, i.e. the capacity to autonomously adapt, evolve and improve so as to optimise a given outcome from input data, without being explicitly programmed to do so. Bias enters the equation because the human beings who write the computer instructions bring in their own biases, and this can happen consciously or unconsciously.
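To make the difference between the two concrete, here is a minimal Python sketch, using entirely hypothetical names and numbers rather than anything presented at the event: the same “approval” decision is written first as an explicit rule and then as a rule learned from past decisions, and a biased decision history quietly becomes a biased learned rule.

```python
# A toy sketch with hypothetical names and thresholds: the same "loan approval"
# decision written first as an explicit rule, then as a rule learned from data.

# 1) Rule-based: the programmer writes the condition explicitly.
def approve_rule_based(income: float) -> bool:
    # "If condition A is met, outcome B should follow."
    return income >= 30_000

# 2) Learning-based: the threshold is derived from past decisions. If those
#    past decisions were biased, the learned rule quietly inherits the bias.
def learn_threshold(history: list[tuple[float, bool]]) -> float:
    approved = [income for income, was_approved in history if was_approved]
    rejected = [income for income, was_approved in history if not was_approved]
    # Naive "learning": place the threshold between the two groups seen in the data.
    return (min(approved) + max(rejected)) / 2

history = [(45_000, True), (38_000, True), (31_000, False), (29_000, False)]
threshold = learn_threshold(history)  # 34_500 on this toy data

def approve_learned(income: float) -> bool:
    return income >= threshold

print(approve_rule_based(32_000), approve_learned(32_000))  # True False
```

In the toy history, applicants with similar incomes were treated differently in the past, so the learned threshold reproduces that pattern even though nobody programmed it explicitly.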
In this context, algorithmic bias is a systematic error in the outcome of an algorithm’s operations. The overall ethical standard that has been agreed upon in order to avoid algorithmic bias is fairness: a set of procedures aimed at avoiding bias so as to ensure outcomes that respect ethical standards such as human agency and oversight, privacy and data governance, individual, social and environmental wellbeing, and transparency and accountability.
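What “systematic error” looks like in practice can be illustrated with a short sketch on invented numbers (our own illustration, not figures from the event): an overall error rate that seems acceptable can hide very different error rates for different groups.

```python
from collections import defaultdict

# (group, prediction_was_correct) pairs from a hypothetical evaluation set;
# the numbers are invented purely to illustrate the shape of the problem.
results = (
    [("white_men", True)] * 95 + [("white_men", False)] * 5
    + [("black_women", True)] * 65 + [("black_women", False)] * 35
)

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct

print("overall error rate:", sum(errors.values()) / len(results))  # 0.2
for group in totals:
    # 0.05 for white_men vs 0.35 for black_women on this toy data
    print(group, "error rate:", errors[group] / totals[group])
```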
There are many examples of algorithmic discrimination. Facial recognition applications, for instance, perform much worse at recognising black women’s faces than white men’s. Similarly, a Google search for “professional” hair shows mostly pictures of white women, while a search for “unprofessional” hair displays predominantly pictures of black women.
Another example is Microsoft’s AI chatbot @TayandYou, which Microsoft described as an experiment in “conversational understanding.” Microsoft launched the chatbot on Twitter, and it took less than 24 hours for Twitter to corrupt it. James Vincent, a senior reporter for The Verge, described the discrimination: shortly after the chatbot’s launch, people started tweeting the bot with all sorts of misogynistic and racist remarks. Because the chatbot was essentially a robot parrot with an internet connection, “it started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.”
The origins and causes of algorithmic discrimination reflect existing discrimination in our offline world, where humans discriminate against each other. It starts with human stereotypes that have led to discrimination in the past (such as the assumption that men are strong and women are weak, or racial stereotypes), the consequences of which are structural inequalities. Stereotypes and biased conduct enter, consciously or unconsciously, into the design of an algorithm, and this leads to the generation of biased data, as in the cases described above.
If societies want to avoid repeating the same patterns of bias and discrimination that we witness in the ‘physical world’, algorithmic discrimination needs to be addressed. Otherwise, we risk creating a digital world that replicates structural inequalities.
The existing EU legal framework, however, only partially captures algorithmic discrimination. First, there are scope-related shortcomings, for example gaps concerning the online discrimination of consumers beyond gender and race. And while there is protection against algorithmic discrimination in the media, advertising and education, there is so far no protection against, for instance, a digital gender pay gap among platform workers who provide a service in return for money (e.g. individuals who use an app, such as Uber, or a website, such as Amazon Mechanical Turk, to match themselves with customers).
Miriam Kullmann, a researcher at WU Vienna University of Economics and Business and at the Harvard University Weatherhead Center for International Affairs, described the algorithmic discrimination of platform workers in a 2018 article, noting that “some female platform workers receive lower pay than their male counterparts”. Kullmann further describes how some online platforms use algorithms to determine pay levels. The key question to be addressed here is:
the extent to which current EU gender equality law, and the principle of equal pay for women and men in particular, is adequate for protecting platform workers in a situation where work-related decisions are not taken by a human being but by an algorithm that is the potential source of discrimination.
Furthermore, there are conceptual and doctrinal frictions in EU legislation, for example around intersectionality. According to the Oxford English Dictionary, intersectionality is “the interconnected nature of social categorizations such as race, class, and gender, regarded as creating overlapping and interdependent systems of discrimination or disadvantage.” The protected grounds of discrimination reflect a “single-axis” model and do not capture the granularity of profiling and sub-categorisation, which can also render many people invisible. In other words, the facial recognition example mentioned above, where black women are affected at the intersection of race and gender, is precisely the kind of harm that a single-axis model overlooks.
There are also procedural difficulties in establishing proof of discrimination, given the lack of transparency of “black box” algorithms, the absence of explainability obligations, and the opacity of proprietary algorithms. The AI “black box” problem refers to the inability to fully understand why an algorithm behaves the way it does. Further procedural difficulties include allocating responsibility within a fragmented chain of actors and complex human-machine interactions, and attributing liability across multiple legal regimes in complex and composite AI systems.
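One way auditors and researchers try to work around the black box is to probe it from the outside. The sketch below is our own simplified illustration, with a made-up “black box” and hypothetical features: it shuffles one input at a time across applicants and measures how many decisions flip, which hints at how strongly the hidden logic depends on that input.

```python
import random

random.seed(0)

def black_box(age: float, postcode_risk: float) -> bool:
    # Hidden logic the auditor cannot read; here it secretly leans heavily on
    # postcode_risk, which could act as a proxy for ethnicity or income.
    return 0.2 * age + 5.0 * postcode_risk > 20

# Hypothetical applicants the auditor is allowed to send through the system.
applicants = [(random.uniform(20, 60), random.uniform(0, 5)) for _ in range(1_000)]

def sensitivity(feature_index: int) -> float:
    """Share of decisions that flip when one feature is shuffled across applicants."""
    shuffled = [row[feature_index] for row in applicants]
    random.shuffle(shuffled)
    flips = 0
    for (age, risk), new_value in zip(applicants, shuffled):
        original = black_box(age, risk)
        perturbed = black_box(new_value, risk) if feature_index == 0 else black_box(age, new_value)
        flips += original != perturbed
    return flips / len(applicants)

print("sensitivity to age:          ", sensitivity(0))
print("sensitivity to postcode_risk:", sensitivity(1))
```

On this toy model, decisions flip much more often when postcode_risk is shuffled than when age is, signalling that the opaque system leans on a feature that may stand in for a protected characteristic.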
In her conclusion, Raphaële Xenidis spoke about solutions to algorithmic discrimination and about opportunities for improving equality through technology. She explained the Prevent, Redress, Open, Train, Explain, Control, Test (“PROTECT”) approach and discussed how technology opens new possibilities, such as the detection of discrimination in algorithmic and human decision-making, for example through the auditing of algorithms. Auditing is a process of analysing an algorithm and the data it processes in order to understand how its developers make decisions and where the data actually comes from.
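As a very simple illustration of what an outcome-based audit can look like, the following sketch, built on an invented audit log rather than any real system, compares positive-decision rates across two groups and computes a disparate impact ratio in the spirit of the US “four-fifths” rule of thumb.

```python
from collections import defaultdict

# Hypothetical audit log of (group, decision) pairs exported from a system.
decisions = (
    [("group_a", True)] * 3 + [("group_a", False)] * 1
    + [("group_b", True)] * 1 + [("group_b", False)] * 3
)

totals = defaultdict(int)
positives = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    positives[group] += approved

rates = {group: positives[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)                   # 0.75 vs 0.25
print("disparate impact ratio:", round(ratio, 2))  # 0.33, flag if below 0.8
```

Real audits go much further, but even this kind of rate comparison can surface patterns worth investigating.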
Technology might also increase the replicability and accuracy of decision-making, and debiasing strategies, such as bias minimisation and mitigation techniques, could be implemented in machine-based decision-making. There are some EU-funded projects, such as SHERPA and SIENNA, that promote “Ethics by Design” in the creation of algorithms. However, this discussion has only just begun, and there is a long way to go before PROTECT or algorithm auditing becomes mainstream. Within “Leading with AI” we shall accompany these discussions and continue writing about this topic.
*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.