Algorithms and Bias in Employment
By Katharina Miller*
On 30th April, the California Fair Employment and Housing Council organised a public hearing on algorithms and bias. Part of the six-hour hearing was dedicated to algorithmic bias in the workplace. The council is part of the Department of Fair Employment and Housing (DFEH), the state agency responsible for enforcing California’s civil rights laws. This post reports on the first expert presentation, given by Aaron Rieke, on algorithms and bias in the workplace. It is important to note that the hiring process, even without artificial intelligence (AI) technologies, can already be biased. This article therefore walks through each stage of the conventional hiring process and compares it with AI-based solutions.
Any hiring funnel, with or without the use of AI, starts with thinking about a new position and which kind of person could fill it. Employers normally have an idea of their talent needs and of the profile that would fit them. Biases can already creep in at this very early stage. If an employee has to be replaced, the employer is likely to search for a person with the same profile; if a new position needs to be created, the employer might already have an idea of the profile of the future employee.
The first step of the hiring funnel is sourcing candidates: employers try to attract potential candidates to apply for open positions through advertisements, job postings, and individual outreach. During this process, if employers are not careful, they could be committing unlawful discrimination. For example, rather than advertising for a “waitress”, an employer should advertise for “waiting staff” or a “waiter or waitress”. Rieke and his team examined AI-based technologies that claim to avoid some of these biases and forms of discrimination, such as applications that help employers create job descriptions. These applications are designed to reach more applicants and encourage a larger and more diverse talent pool, focusing especially on gender diversity. Rieke and his team conclude that such tools, while far from perfect, do help employers make job descriptions more inclusive.
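To make the idea concrete, here is a minimal sketch (in Python) of how a job-description tool might flag gendered wording and suggest neutral alternatives. It is not any vendor’s actual implementation, and the word list is purely illustrative.

```python
import re

# Illustrative word list only; real tools use much larger, curated lexicons.
GENDERED_TERMS = {
    "waitress": "waiting staff",
    "salesman": "salesperson",
    "chairman": "chairperson",
}

def flag_gendered_terms(posting: str) -> list[tuple[str, str]]:
    """Return (term, suggested alternative) pairs found in the posting text."""
    findings = []
    for term, suggestion in GENDERED_TERMS.items():
        if re.search(rf"\b{term}\b", posting, flags=re.IGNORECASE):
            findings.append((term, suggestion))
    return findings

print(flag_gendered_terms("We are hiring a waitress for our cafe."))
# [('waitress', 'waiting staff')]
```

Real tools go further, for example by scoring the overall tone of a posting, but the basic mechanism of detecting and replacing exclusionary wording is the same.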
According to Rieke and his team, problems can also arise at the advertising stage. Many employers use paid digital advertising tools to disseminate job opportunities to a greater number of potential applicants. Another expert at the hearing, Pauline Kim, argued that “not informing people of a job opportunity is a highly effective barrier to applying for that position.” Rieke concludes that “the complexity and opacity of digital advertising tools make it difficult, if not impossible, for aggrieved jobseekers to spot discriminatory patterns of advertising in the first place.” He comes to a similar conclusion regarding matching tools, stating that “tools that rely on attenuated proxies for ‘relevance’ and ‘interest’ could end up replicating the very cognitive biases they claim to remove.”
During the screening stage, employers assess candidates, both before and after they apply, by analysing their experiences, skills, and personalities. At this stage, employers can often judge candidates based on their own biases; for example, they might reject women aged 25-40 because they are of child-rearing age. Rieke and his team also examined AI-based tools that support the screening stage. Emerging tools assess, score, and rank applicants according to their qualifications, soft skills, and other capabilities in order to help hiring managers decide who should move on to the next stage. These tools help employers quickly whittle down their applicant pool so they can spend more time on the applicants deemed strongest, and a substantial number of job applicants are automatically or summarily rejected at this stage. Rieke and his team conclude that when screening systems aim to replicate an employer’s prior hiring decisions, the resulting model will very likely reflect prior interpersonal, institutional, and systemic biases, which means that these kinds of screening tools are also highly biased. A well-documented example is Amazon’s AI recruiting tool, which replicated institutional bias against women.
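The mechanism behind this conclusion can be illustrated with a small sketch using entirely hypothetical data (this is not any vendor’s tool). A screening model is trained on an employer’s past hiring decisions that favoured men; even though the model never sees gender directly, a seemingly neutral proxy feature lets it reproduce the historical gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)               # 0 = woman, 1 = man (hypothetical label)
proxy = gender + rng.normal(0, 0.5, n)       # a "neutral" feature correlated with gender
skill = rng.normal(0, 1, n)                  # a genuinely job-related feature
hired = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1).astype(int)  # biased hiring history

X = np.column_stack([proxy, skill])          # note: gender itself is NOT a feature
model = LogisticRegression().fit(X, hired)   # model learns to replicate past decisions
shortlisted = model.predict(X)

for value, name in [(0, "women"), (1, "men")]:
    rate = shortlisted[gender == value].mean()
    print(f"predicted shortlist rate for {name}: {rate:.2f}")
# The proxy feature alone is enough for the model to reproduce the gap
# present in the historical decisions it was trained on.
```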
The interview process is an opportunity for employers to assess applicants in a more direct and individualised way. However, employers should avoid questions that make assumptions about candidates based on protected characteristics, such as their family plans. There are AI-based tools that claim to measure applicants’ performance in video interviews by automatically analysing verbal responses, tone, and even facial expressions. In their research, Rieke and his team focused on one tool by the company HireVue. This tool lets employers solicit recorded interview answers from applicants and then “grades” those responses against interview answers provided by current, successful employees. More specifically, HireVue’s tool parses videos using machine learning, extracting signals such as facial expressions, eye contact, vocal indications of enthusiasm, word choice, word complexity, topics discussed, and word groupings. The use of tools such as HireVue raises questions on multiple fronts, particularly ethical ones. Rieke and his team found that speech recognition software can perform poorly, especially for people with regional and non-native accents, and that facial analysis systems can struggle to recognise the faces of women with darker skin. Furthermore, some interviewees might be rewarded for irrelevant or unfair factors, such as exaggerated facial expressions, and penalised for visible disabilities or speech impediments. Moreover, using these kinds of biometric data to predict workplace success, or to make or inform hiring decisions, might lack a legal basis.
During the final stage of the hiring funnel, the selection stage, employers make final hiring and compensation determinations. At this last stage, women “consistently submit lower wage bids than men do.” There are hiring tools that aim to predict whether candidates might violate workplace policies, or to estimate what mix of salary and other benefits to offer. Rieke and his team worry that such tools might amplify pay gaps for women and workers of colour. As he stated, “human resource data commonly include ample proxies for a worker’s socioeconomic and racial status, which could be reflected in salary requirement predictions. In any case, offering employers highly specific insight into a candidate’s salary requirements increases information asymmetry between employers and candidates at a critical moment of negotiation.”
Rieke insists that all AI tools should be audited: quantitatively, by using labelled demographic data to check outcomes, and qualitatively, by interrogating the actual variables used and their job-relatedness.
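A minimal sketch of the quantitative side of such an audit is shown below: with labelled demographic data, compare selection rates across groups. The 0.8 threshold is the “four-fifths rule” from the US federal Uniform Guidelines on Employee Selection Procedures, commonly used as a first screen for adverse impact; the counts here are hypothetical.

```python
def adverse_impact_ratio(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate of each group divided by the highest group's rate."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical outcome counts from a screening tool
applicants = {"group_a": 400, "group_b": 600}
selected = {"group_a": 40, "group_b": 120}

for group, ratio in adverse_impact_ratio(selected, applicants).items():
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this only surfaces disparate outcomes; it says nothing about why they arise, which is exactly why Rieke pairs it with the qualitative interrogation of the variables themselves.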
Rieke’s conclusion for policy makers is: do not fixate on AI. Personality tests and commonplace applicant tracking system features need scrutiny too, and Rieke advises against focusing exclusively on statistical auditing, which comes at the expense of other forms of bias examination. He also insists that outdated federal guidelines should be removed. Furthermore, Rieke encourages policy makers to require employers to show their anti-discrimination work and to publish new standards and guidance that help scrutinise sourcing and recruiting practices.
It is therefore very important to tackle discrimination in the real world, so that it is not replicated and “set in concrete” in the digital world: once algorithms are online and in use, it is very difficult to stop their usage.
*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.
Save The Date: Wednesday, 26th May | 4pm CEST
Webinar: AI in Agriculture
Join us on Wednesday, 26th May at 4pm - 5pm CEST for our next Leading with AI Webinar on the topic of AI in Agriculture. Our speaker is Daniel Mutembesa, a research scientist at the Makerere Artificial Intelligence Lab in Uganda. His recent work includes large-scale crop disease and pest sensing with smallholder farmers, social credit scoring, and deploying AI-based tools for in-field crop diagnosis to farmers around Uganda.
Subscribe to our newsletter to receive the details. Attendance is free.
Register here, via Eventbrite.
Speaker's Biography: Daniel Mutembesa is a research scientist and collaboration lead at the Makerere Artificial Intelligence Lab. He focuses on algorithmic game theory and mechanism design, behavioural and forecast modelling in crowdsourcing games, and applied artificial intelligence in the developing world.
His research covers algorithmic mechanism design of community sensing games for surveillance in agriculture and health, modelling participant behaviour in their unique low-resource settings, community graph networks, and machine learning models to forecast the risk burden of rural communities for diseases like malaria.
He is a recent grantee of the Facebook Mechanism Design for Social Good Research award.
We look forward to seeing you there to share the knowledge and lead with AI.