Does Africa have the telecommunications backbone to support Artificial Intelligence?

By Ernest Amoabeng Ortsin*

Source: Pixabay

In 2018, the International Telecommunication Union (ITU) issued a report indicating that only 24.4% of Africans were Internet users, compared with 51.2% of the global population. Although Internet penetration in Africa has improved vastly since 2005 (when only 2.1% of Africans had access), it remains abysmally low compared with the rest of the world.

In the global north, specifically in Europe and in the Americas, the ITU report further noted that Internet usage stands at about 79.6% and 69.6% respectively. In the Commonwealth of Independent States and in the Gulf region, the proportion of the population using the Internet is 71.3% and 54.7% respectively, whereas in Asia and the Pacific it is about 47%.

The above data clearly show that Africa trails the rest of the world in Internet access and usage. Several factors account for this, but chief among them is the lack of adequate telecommunications infrastructure, including terrestrial optic fibre networks, submarine cables, satellite communication, mobile communication, digital terrestrial broadcasting, data centres, telecentres, and smart digital devices.

Indeed, according to the African Development Bank (AfDB), the continent has a huge infrastructure deficit that requires investments of between US$130 billion and US$170 billion annually to resolve. These deficits are most pronounced in transportation, education, health and telecommunications. Presently, the investment gap to meet these deficits is between US$52 billion and US$92 billion. Of the required investments, the information and communications technology (ICT) sector's needs amount to US$4 billion to US$7 billion.

It is significant to note that much of the funding for ICT infrastructure in Africa comes from foreign direct investment rather than from the local private sector or governments. This puts the continent at a disadvantage, as foreign investors determine which parts of a country receive investment. More often than not, these investments centre on national capitals and other major towns with viable economic activity, leaving remote and rural areas underserved. It is therefore not surprising that, according to the African Union (AU), an estimated 300 million Africans live more than 50 km from a fibre or cable broadband connection.

It is important to underscore that AI projects are cost-intensive, and in that sense the question of an infrastructural backbone to support AI in Africa needs deeper examination. For example, the rollout of “fifth generation” (5G) technology has been associated with AI because it will make it possible to transport large volumes of data (including images) at high speed, with high quality, over long distances. However, given that African countries have struggled for more than two decades to make “third generation” (3G) and “fourth generation” (4G) technologies available to their populations, there are concerns about how readily 5G will become available in the African tech space. Already, more than 60 countries around the world are reported to have rolled out 5G. As of April 2021, only South Africa and Kenya had done so in Africa.

A recent report by Research and Markets has noted that, “the global 5G infrastructure market will grow with a cumulative annual growth rate (CAGR) of 64.1% between 2019 and 2025, with an estimated value of US$1.9 billion in 2019.” By the end of 2020, 5G global infrastructure cost was expected to hit US$2.7 trillion, and investing in 5G infrastructure upgrades is estimated to cost around US$1 trillion.
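As a rough illustration of what the quoted growth rate implies, the short sketch below simply projects the quoted 2019 base value forward at a 64.1% compound rate. It is arithmetic on the figures quoted above, not an independent forecast.

```python
# Rough arithmetic only: projecting the quoted 2019 base value forward at the quoted CAGR.
base_value_usd_bn = 1.9   # estimated 5G infrastructure market value in 2019 (US$ billion)
cagr = 0.641              # 64.1% compound annual growth rate

for year in range(2019, 2026):
    projected = base_value_usd_bn * (1 + cagr) ** (year - 2019)
    print(f"{year}: ~US${projected:,.1f} billion")
```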

Technology experts are excited about 5G because of its potential to revolutionize the digital space. Theoretically, it is expected to run at a peak speed of 20 Gbps, compared with 4G's peak of 1 Gbps. It is also expected to have lower latency (the delay before data begins to move after a request is made), which will in turn enhance mobile broadband telephony, mission-critical communications, and massive Internet of Things (IoT) deployments. According to Cisco Systems:

in healthcare, 5G technology and Wi-Fi connectivity will enable patients to be monitored via connected devices that constantly deliver data on key health indicators, such as heart rate and blood pressure. In the auto industry, 5G combined with machine learning-driven algorithms will provide information on traffic, accidents, and more; vehicles will be able to share information with other vehicles and entities on roadways, such as traffic lights.
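To put the theoretical peak speeds mentioned above into perspective, here is a minimal back-of-the-envelope sketch comparing how long a large file transfer would take at the 4G and 5G peak rates. The 10 GB file size is an assumption for illustration, and real-world throughput is far below these theoretical peaks.

```python
# Back-of-the-envelope comparison of theoretical peak rates (real-world throughput is far lower).

def transfer_time_seconds(size_gigabytes: float, speed_gbps: float) -> float:
    """Time to move `size_gigabytes` of data at `speed_gbps` (gigabits per second)."""
    return (size_gigabytes * 8) / speed_gbps   # 1 byte = 8 bits

file_gb = 10  # e.g. a 10 GB batch of medical images
print(f"4G peak (1 Gbps):  {transfer_time_seconds(file_gb, 1):.0f} seconds")
print(f"5G peak (20 Gbps): {transfer_time_seconds(file_gb, 20):.0f} seconds")
```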

In Africa, AI is mainly expected to be deployed in areas such as agriculture, health, education and finance. In agriculture, for example, AI is expected to be used in the diagnosis of crop and animal diseases in order to boost food security. In health, it is anticipated that AI will be used in the diagnosis of ailments to make up for the low doctor-to-patient ratio. In education, AI is expected to augment classroom learning through online platforms to address the shortage of educators. And, finally, in the financial sector, AI is expected to increase financial inclusion through fintechs.

The AU recently launched a Digital Transformation Strategy for Africa (2020-2030) with the overall objective:

To harness digital technologies and innovation to transform African societies and economies to promote Africa's integration, generate inclusive economic growth, stimulate job creation, break the digital divide, and eradicate poverty for the continent’s socio-economic development and ensure Africa’s ownership of modern tools of digital management.

No doubt, AI has the potential to assist African countries in achieving the objective of the digital transformation strategy. However, they first of all need to invest more in strengthening the telecommunications backbone of the continent.


*Ernest Amoabeng Ortsin is an Africa-based researcher with a growing interest in Artificial Intelligence policy research. He studied Political Science at the University of Ghana. He was a participant in the Leading with AI Lab and is a co-founding member of Leading with AI.

Algorithm discrimination and PROTECT

By Katharina Miller*

Source: iStock by Getty Images


In their recent publication “Algorithmic discrimination in Europe: Challenges and opportunities for gender equality and non-discrimination law”, Raphaële Xenidis and Professor Janneke Gerards from Utrecht University propose a specific approach to developing solutions to algorithmic discrimination and to creating opportunities for improving equality through technology. Their Prevent, Redress, Open, Train, Explain, Control, Test (“PROTECT”) approach is an integrated framework for addressing algorithmic discrimination, with the ambition of bringing together the different possible tools, instruments, solutions and good practices. The steps are as follows:

Prevent: diverse and well-trained IT teams, equality impact assessments, ex ante “equality by design” or “legality by design” strategies.

Redress: combining different legal tools in non-discrimination law, data protection law etc. to foster clear attribution of legal responsibilities, clear remedies, fair rules of evidence, flexible and responsive interpretation and application of non-discrimination concepts.

Open: fostering transparency, e.g. through open data requirements for monitoring purposes (e.g. access to source codes).

Train: educating, creating, and disseminating knowledge on non-discrimination and equality issues among IT specialists; raising awareness about issues of algorithmic discrimination with regulators, judges, recruiters, officials, society at large.

Explain: explainability, accountability and information requirements.

Control: active human involvement (human-centred artificial intelligence), e.g. in the form of human-in-the-loop (HITL) systems designed to avoid rubber-stamping, complemented by supervision and consultation mechanisms (chain of control and consultation with users).

Test: continuous monitoring of algorithms and their output, setting up auditing, labelling and certification mechanisms.

According to Gerards and Xenidis, the prevention of algorithmic discrimination can be achieved through integrating various legal, knowledge-based, and technological measures. These measures include diversifying the professional communities that design and train algorithms, and deploying “equality by design” strategies that offer guidance on the equality law framework to computer and data scientists. Equality and gender impact assessments, which aim to mainstream equality in algorithmic design, are also introduced. According to the authors, such prevention strategies can only be effective if two important prerequisites are met.

First, it is crucial to train people and disseminate knowledge about inequality challenges across society in general. This means that IT professionals should be educated in gender equality and non-discrimination law in the same way that medical professionals receive ethics training. Conversely, equality law professionals (practitioners, civil servants, judges, regulators, equality bodies, etc.), as well as citizens and public and private users of artificial intelligence (AI) tools, should be informed of the discriminatory risks linked to the use of AI and of existing debiasing strategies.

Second, IT professionals should pay close attention to the transparency and explainability of algorithms. The same goes for the availability of open and clean data: IT professionals and all stakeholders involved in the creation of AI tools should try to work only with open and clean data, which are key for training and control purposes within these prevention strategies.

According to Gerards and Xenidis, constant monitoring of AI tools is important to curb algorithmic discrimination. Testing mechanisms should be put in place to audit algorithms, particularly high-impact ones. Another option proposed by Gerards and Xenidis is certification strategies by tech companies in order to guarantee that the algorithms they design and sell are not discriminatory. To my knowledge, no such certification strategy yet exists. The commitment of tech companies is especially crucial for the monitoring strategies, which will have to improve the transparency, accountability and explainability of algorithms. The authors think that, “in line with the second dimension of the black box metaphor, the new horizons opened up by algorithmic technologies should be turned into opportunities to better detect and correct discrimination.”

Gerards and Xenidis think that human control plays a vital role in this integrated approach to algorithmic discrimination. When creating AI tools, there should be public collective supervision as well as individual human supervision, combined with a clear allocation of liability and legal responsibility, to foster active human control over decisions relying on algorithmic recommendations or predictions. The authors hope that such elements will discourage rubber-stamping and offset automation biases.

Finally, Gerards and Xenidis address a very important aspect of EU equality law and algorithmic discrimination. They advocate for legal redress to be made available in the areas where it is currently lacking. As I discussed in an earlier Leading with AI article, algorithmic discrimination exacerbates existing weaknesses of EU equality law, such as its limited treatment of intersectionality. Addressing algorithmic discrimination will mean reconsidering the gaps in the material scope of EU gender equality and non-discrimination law.

In the final statement of the publication mentioned above, the authors write: “Adapting and revisiting some of the core concepts of the EU equality doctrine will also be necessary in order to accommodate the changing nature of discrimination. Legal redress will have to be transversal and integrate gender equality and non-discrimination law with other legal areas, not least data protection law.”

I agree with the authors’ statement. As an example, some European Union member states prohibit the use of sex-disaggregated data on employees, citing data protection concerns. However, without sex-disaggregated employee data it is nearly impossible to address the gender pay gap, which will then be “concreted” by AI tools (as described in my article on “Algorithms and bias in employment”). In order to address these challenges successfully and to ensure effective redress against algorithmic discrimination, it will be crucial that all relevant institutions cooperate in a proactive manner.

As a contributor to the “Leading with AI” newsletter, I will try to do my bit to learn and inform myself, other equality law professionals, and citizens (not only EU citizens, but all citizens worldwide) of the discriminatory risks linked to the use of AI and of existing debiasing strategies.


*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.

Autonomous technology: Potential and challenges

By Rodrigue Anani*

Source: Huawei.com - Moving towards autonomous driving networks

Recent progress in Artificial Intelligence (AI) has created the breeding ground for the development of new technologies, including autonomous technology. By autonomous technology, we mean any technology that can function without human intervention.

It is easy to confuse the terms autonomous, automatic, and automated, and even to use them interchangeably. However, it is important to highlight that they do not mean the same thing. According to Scott Totman, Chief Technology Officer at LendingTree Inc., “the easiest way to distinguish between autonomous and automated is by the amount of adaptation, learning and decision making that is integrated into the system.”

Automated systems typically run within a well-defined set of parameters and are very restricted in what tasks they can perform. The decisions made or actions taken by an automated system are based on predefined heuristics (rules or set of instructions).

Autonomous technology goes a step beyond automated technology by being “intelligent” and having the capacity to manage itself. Today, autonomous technology is no longer the stuff of science fiction and has many uses in daily life, including transportation (autonomous vehicles), the military (military drones, lethal autonomous weapons), retail (e.g. Amazon Go) and healthcare (surgical robots).
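To make the distinction concrete, the toy sketch below contrasts a purely automated controller (a fixed heuristic) with a crudely "autonomous" one that adapts its own parameter from feedback. It is an illustrative simplification, not a description of any real system.

```python
# Toy contrast: an automated controller follows a fixed rule, while an "autonomous"
# one adapts its own behaviour from feedback. Purely illustrative.

class AutomatedThermostat:
    """Automated: always applies the same predefined heuristic."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def heater_on(self, room_temp: float) -> bool:
        return room_temp < self.setpoint          # fixed rule, never changes


class AdaptiveThermostat(AutomatedThermostat):
    """Closer to 'autonomous': adjusts its own setpoint based on occupant feedback."""
    def learn(self, occupant_felt_cold: bool) -> None:
        self.setpoint += 0.5 if occupant_felt_cold else -0.5
```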

Autonomous technology presents many advantages. In transportation, the use of autonomous vehicles can reduce road accidents, traffic congestion and carbon dioxide (CO2) emissions, lower fuel consumption, make transportation more accessible, and reduce travel time and transportation costs. According to Frank Pallone, Jr., ranking member of the United States House Energy and Commerce Committee, “Self-driving cars have the potential in the future to reduce deaths and injuries from car crashes, particularly those that result from driver distraction.” By reducing traffic congestion, autonomous cars can in turn also reduce CO2 emissions. The Future of Driving report from Ohio University states that:

Since the software will drive the car, the modern vehicle can now be programmed to reduce emissions to the maximum extent possible. The transition to the new-age cars is expected to contribute to a 60% fall in emissions.

The same report further states:

Combining digital maps and other technological tools with driverless automobiles will result in the more efficient driving experience. As of date, congestion on roads is causing urban Americans to spend close to 7 billion hours per year on the road, waste 3.1 billion gallons of fuel, and incur losses of around US$160 billion due to traffic congestion.

Autonomous vehicles can make our roads safer, help reduce carbon dioxide emissions, and save money.

Autonomous technologies will also play an increasingly significant and crucial role in military operations. Several military drones, or unmanned aircraft, are already in use. They can collect the real-time surveillance information needed for quick decisions. They can be used in conflict-ridden areas where it would be perilous to send military personnel to carry out targeted attacks, thus saving lives and costs. Military drones are also crucial when it comes to supporting high-level missions such as intelligence, surveillance, reconnaissance, and search and rescue.

Similarly, autonomous technology is predicted to impact the health industry; think of surgical robots, on-demand healthcare, and so on. The Smart Autonomous Robotic Assistant Surgeon (SARAS) is a project that aims to enable a single surgeon to execute operations: a cooperative, cognitive supervisory system able to infer the actual state of the surgical procedure from its sensing system and to act in accordance with the surgeon's needs. Zipline is a company that leverages autonomous technology to deliver medicines and life-saving supplies via drones; its goal is to empower healthcare professionals to save lives through flexible and reliable shipping.

Even though autonomous technologies present tremendous potential and advantages, they also have some pitfalls.

Autonomous vehicles and drones, as described above, have many advantages. To work smoothly, however, they depend heavily on environmental conditions such as road quality, weather and traffic. Their sensors must work flawlessly, but that has not always been the case: there have been instances of the sensors used in cars malfunctioning in rainy or snowy conditions. As stated in the Future of Driving report:

California’s pilot program saw Google’s car suffering from one incident per about 1,250 miles. Volkswagen’s car faced an incident every 57 miles while Nissan experienced one incident every 14 miles.
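Put on a common basis, the figures in the quote translate into very different incident rates. The quick conversion below uses only the numbers quoted above.

```python
# Converting the quoted figures to incidents per 1,000 miles for easier comparison.
miles_per_incident = {"Google": 1250, "Volkswagen": 57, "Nissan": 14}

for maker, miles in miles_per_incident.items():
    print(f"{maker}: roughly {1000 / miles:.1f} incidents per 1,000 miles")
```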

The widespread use of driverless cars will raise new challenges. Who will be responsible in case of an accident? Should the cars be programmed to protect their passenger(s) or other drivers and pedestrians that might come into contact with the car? To what extent should they protect their passengers versus other people? How should those cars share road space with human drivers? What are the implications of driverless cars in terms of work? How many people will lose their jobs once they become widespread?

Similar questions arise when analysing the use of autonomous technology in military operations and health. For instance, who should be held accountable when a military drone hits the wrong target? What if the drone goes completely out of control? Do the existing regulations support the use of autonomous technology in the healthcare industry?

Because a scenario of zero risk does not exist, we must put in place the checks and balances that ensure we build a technology that is safe enough to use and which does not keep the human completely out of the loop.



* Rodrigue Anani is a software engineer with over five years of experience. Rodrigue is open-minded, pragmatic, and has a keen interest in building world-class solutions that have a positive impact on genuine and sustainable development. He holds a Bachelor of Science in Information Technology from BlueCrest College, Ghana. He also holds a certificate in the “Internet of Things” delivered by the GSMA, and a certificate in Leading with Artificial Intelligence delivered by the Training Center of the International Labour Organisation and the Global Leadership Academy (GLAC). He has working experience and knowledge in both West African and North African countries.

Smarter learning organisations: How AI is influencing corporate learning

By Kim Ochs*

Source: Unsplash (Andrew Neel)

Opportunities to learn, grow, and obtain certifications are increasingly becoming standard employment perks, particularly at corporations, large NGOs and established companies. A 2019 study from Sitel Group in the United States found that 37% of current employees said they would leave their job if they were not offered training to learn new skills. “Learning in the flow of work,” a term coined by global analyst Josh Bersin, has become a trend and offices of human resources (HR) are expanding to include learning and development (L&D).

Content management systems (CMSs), learning management systems (LMSs) and learning experience platforms (LXPs), as well as learning content providers (e.g. LinkedIn Learning, Coursera), provide the technical infrastructure. Artificial intelligence (AI) is an important enabling technology in all of these systems. With their very diverse products and services, some of these vendors talk about a “Netflix of learning,” while others find the term problematic. As with Netflix, these organisational learning systems have four main functions backed by machine learning and AI: aggregation, exploration, optimisation, and recommendation.

Aggregation is the first step: bringing all of the content together. A CMS, such as Microsoft SharePoint, provides a solution for storing, managing, searching, delivering and sharing files and information. An LMS is used to administer, deliver, track, and manage content specific to training and education (e.g. course delivery, tracking assignments, grading). Often, an organisational single sign-on (SSO) solution is implemented to give end-users seamless access to all of the content residing in both systems. At this stage, AI might be used for advanced tagging of structured and unstructured content, or to classify and extract information and then automatically apply metadata.
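As a hypothetical illustration of what "automatically applying metadata" can look like at its simplest, the sketch below tags documents by matching their text against a small keyword dictionary. Real products use far more sophisticated models; the tag names and keywords here are invented for the example.

```python
# Minimal sketch of automatic metadata tagging; tag names and keywords are invented.
TAG_KEYWORDS = {
    "onboarding":      ["induction", "orientation", "new hire"],
    "data-protection": ["gdpr", "personal data", "privacy"],
    "leadership":      ["management", "coaching", "feedback"],
}

def auto_tag(text: str) -> list[str]:
    """Return every tag whose keywords appear in the document text."""
    lowered = text.lower()
    return [tag for tag, words in TAG_KEYWORDS.items()
            if any(word in lowered for word in words)]

print(auto_tag("Induction checklist and GDPR basics for new hires"))
# -> ['onboarding', 'data-protection']
```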

Next is exploration—the ability to search and find what you are looking for by theme, topic, and a variety of other search terms. Search terms are built on a taxonomy, usually starting with the vendor’s own taxonomy and often incorporating terms provided by the organisation. For example, if the organisation has offices in multiple countries, each of which has its own specific induction programme, the organisation might include their list of countries as part of the taxonomy so people can easily search and find their country’s specific induction programme.
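A minimal sketch of taxonomy-driven exploration follows, assuming a hypothetical catalogue in which each item carries the tags applied at the aggregation stage; searching then reduces to filtering on those taxonomy terms. The titles and tags are made up for the example.

```python
# Minimal sketch of taxonomy-driven search over a hypothetical learning catalogue.
catalogue = [
    {"title": "Ghana office induction", "tags": {"onboarding", "ghana"}},
    {"title": "Kenya office induction", "tags": {"onboarding", "kenya"}},
    {"title": "GDPR essentials",        "tags": {"data-protection"}},
]

def search(required_tags: set[str]) -> list[str]:
    """Return titles of items carrying all of the requested taxonomy terms."""
    return [item["title"] for item in catalogue if required_tags <= item["tags"]]

print(search({"onboarding", "ghana"}))   # -> ['Ghana office induction']
```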

AI and machine learning make optimisation possible. This allows system administrators and L&D managers to see which content people are interacting with and how long they are engaging, and to identify where improvements can be made. The specific approach, the depth of the analytics, and the level of detail provided vary across vendors, as does their focus, which can be a distinguishing factor in system selection.

Finally, the systems make recommendations. As organisations look to include and integrate external learning content into their offerings, such as public blog posts or videos, or external content from learning providers (e.g. LinkedIn Learning, Coursera or EdX), a Learning Experience Platform (LXP) can be used. This leverages the existing technologies in the organisational learning system and uses AI to make recommendations to users, suggesting new and related learning content, or creating learning pathways based on experiences.
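At its core, a recommendation step can be as simple as ranking other items by how much their tags overlap with what a learner has already completed. The sketch below shows that idea with made-up items and a Jaccard similarity; it is a deliberate simplification of what commercial LXPs do, not a description of any vendor's method.

```python
# Minimal content-based recommendation: rank items by tag overlap with completed content.
items = {
    "Intro to data privacy":  {"data-protection", "compliance"},
    "Advanced GDPR workshop": {"data-protection", "legal"},
    "Coaching for managers":  {"leadership", "coaching"},
}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two tag sets (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(completed_tags: set[str], top_n: int = 2) -> list[str]:
    ranked = sorted(items, key=lambda title: jaccard(items[title], completed_tags),
                    reverse=True)
    return ranked[:top_n]

print(recommend({"data-protection"}))
# -> ['Intro to data privacy', 'Advanced GDPR workshop']
```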

As always, when looking at the implementation of AI, it is important to ask: where are the risks that accompany these rewards? Or where does the human need to step in? At the organisational level, here are a few questions L&D administrators could ask as they are evaluating and implementing these systems:

1.     Do I have a good understanding of the taxonomy that will be used to make recommendations? Are there terms that could be problematic in the context of my organisation, or what specific terms should I provide to the vendors? It is advisable for the information technology team and the human resources / L&D team to discuss these questions together and reach consensus before the system implementation starts. Also, keep the vendors updated on major changes in vocabulary and terminology following strategic reviews or significant business changes.

2.     What content do we (not) want to be searchable for employee-learners? And are the original content creators aware of who might be looking at their content included in the index and search? Consider guidelines you might need to provide or expectations you might need to set with content providers.

3.     What content needs to be excluded due to data protection laws (e.g. the GDPR in the European Union)? How should it be managed to ensure it is excluded?

4.     How might we need to change administrative permissions among L&D or HR staff? Organisations with compulsory training or compliance requirements often connect the talent management and learning systems. For example, once an employee completes a training, their personnel file will be updated, and a reminder set up for the following year.  When such systems are linked, it is important to also check what information will be displayed and to whom.

5.     What information are you agreeing to share with the vendors? Check the terms and services carefully.


As an employee-learner, here are some strategies:

1.     Get very clear about your learning goals and priorities. Recommendations can be helpful, but not always. If you liked one show on a streaming service, the suggestion to “try this” might leave you completely satisfied or scratching your head and questioning the “intelligence” of the AI. Know your learning goals. Not every recommendation is going to be a winner.

2.     Be discerning in your content selection. Learning opportunities can seem endless, but time is still limited. Focus on finding learning opportunities that include reflection and critical thinking, and applied activities. Read comments and reviews. Talk to colleagues about which learning opportunities they found most relevant to the specific context in which you work.

3.     Apply what you learn as soon as possible.  Create ways to take the knowledge and use it in your work, and to implement the new skills. For many people, online learning can feel passive – watching videos, clicking boxes. Find ways to make the content active.

4.      Create learning communities. Find ways to discuss and share what you learn with your colleagues. Exchange ideas on an online discussion board, self-organise meetings, or meet offline over lunch. Share strategies on how to apply what you learn.

5.     Check your settings and permissions. Most of the learning solutions mentioned have mobile applications. As with any app on your phone, check your settings and permissions to share only the information you are comfortable sharing.


*Kim Ochs has been active in the field of educational technology for more than a decade, spanning work in higher education, research, and start-ups, working with international organisations, NGOs, private companies, and edtech investors. Kim holds a doctorate in educational studies from the University of Oxford.


Save The Date: Wednesday, 26th May | 4pm CEST
Webinar: AI in Agriculture

Join us on Wednesday, 26th May at 4pm - 5pm CEST for our next Leading with AI Webinar on the topic of AI in Agriculture. Our speaker is Daniel Mutembesa, a research scientist at the Makerere Artificial Intelligence Lab in Uganda. His recent work includes large scale crop disease and pest sensing with smallholder farmers, social credit scoring, and deploying AI-based tools for in-field crop diagnosis to farmers around Uganda.

Subscribe to our newsletter to receive the details. Attendance is free.
Register here, via Eventbrite.

Speaker's Biography: Daniel Mutembesa is a research scientist and collaboration lead at the Makerere Artificial Intelligence Lab. He focuses on algorithmic game theory and mechanism design, behavioural and forecast modelling in crowdsourcing games, and applied artificial intelligence in the developing world.

His research covers algorithmic mechanism design of community sensing games for surveillance in agriculture and health, modelling participant behaviour in their unique low-resource settings, community graph networks, and machine learning models to forecast the risk burden of rural communities for diseases like malaria.

He is a recent grantee of the Facebook Mechanism Design for Social Good Research award.

We look forward to seeing you there to share the knowledge and lead with AI.

Algorithms and bias in employment

By Katharina Miller*

Photo by Possessed Photography on Unsplash

On 30th April, the California Fair Employment and Housing Council organised a public hearing on algorithms and bias. Part of the six-hour hearing was dedicated to algorithmic bias in the workplace. The council is part of the Department of Fair Employment and Housing (DFEH), the state agency responsible for enforcing California's civil rights laws. This post reports on the first expert speech, given by Aaron Rieke, on algorithms and bias in the workplace. It is important to mention that the hiring process, even without the use of artificial intelligence (AI) technologies, can already be a biased process. This article therefore describes the traditional, "on-site" hiring process and compares it with AI-based solutions.

Any hiring funnel, with or without AI, starts with thinking about a new position and about which kind of person could fill it. Normally, employers have an idea of their talent needs and of the profile of the person who should meet that need. Biases can creep in at this very early stage. If an employee has to be replaced, the employer is likely to search for a person with the same profile. If a new position needs to be created, the employer might already have an idea of the profile of the future employee.

The first step of the hiring funnel is sourcing candidates: employers try to attract potential candidates to apply for open positions through advertisements, job postings, and individual outreach. During this process, if employers are not careful, they could be committing unlawful discrimination. For example, rather than advertising for a "waitress", an employer should advertise for "waiting staff" or a "waiter or waitress". Rieke and his team examined AI-based technologies that claim to avoid some of these biases and discrimination, such as applications that help employers create job descriptions. These applications are designed to reach more applicants and encourage a larger and more diverse talent pool, focusing especially on gender diversity. Rieke and his team conclude that such tools, while far from perfect, do help employers make job descriptions more inclusive.

According to Rieke and his team, problems can also arise at the advertising stage. Many employers use paid digital advertising tools to disseminate job opportunities to a greater number of potential applicants. Another expert at the hearing, Pauline Kim, argued that, “not informing people of a job opportunity is a highly effective barrier to applying for that position.” Rieke concludes that, “the complexity and opacity of digital advertising tools make it difficult, if not impossible, for aggrieved jobseekers to spot discriminatory patterns of advertising in the first place.” He comes to a similar conclusion regarding matching tools stating that, “tools that rely on attenuated proxies for ‘relevance’ and ‘interest’ could end up replicating the very cognitive biases they claim to remove.”

During the screening stage, employers assess candidates, both before and after those candidates apply, by analysing their experiences, skills, and personalities. At this stage, employers can often judge candidates based on their own biases; for example, they might reject women aged 25-40 because they are of childbearing age. Rieke and his team also examined AI-based tools that support the screening stage. Emerging tools assess, score, and rank applicants according to their qualifications, soft skills, and other capabilities in order to help hiring managers decide who should move on to the next stage. These tools help employers quickly whittle down their applicant pool so they can spend more time considering the applicants deemed strongest. A substantial number of job applicants are automatically or summarily rejected during this stage. Rieke and his team conclude that when screening systems aim to replicate an employer's prior hiring decisions, the resulting model will very likely reflect prior interpersonal, institutional, and systemic biases. This means that these kinds of screening tools are also highly biased, as the well-documented case of Amazon's AI recruiting tool, which replicated institutional bias against women, showed.

The interview process is an opportunity for employers to assess applicants in a more direct and individualised way. However, employers should avoid asking questions that make assumptions about candidates based on protected characteristics, such as their family plans. There are AI-based tools that claim to measure applicants' performance in video interviews by automatically analysing verbal responses, tone, and even facial expressions. In their research, Rieke and his team focused on one tool by the company HireVue. This tool lets employers solicit recorded interview answers from applicants and then "grades" these responses against interview answers provided by current, successful employees. More specifically, HireVue's tool parses videos using machine learning, extracting signals like facial expressions, eye contact, vocal indications of enthusiasm, word choice, word complexity, topics discussed, and word groupings. The use of tools such as HireVue raises questions on multiple fronts, particularly ethical ones. Rieke and his team found that speech recognition software can perform poorly, especially for people with regional and non-native accents, and that facial analysis systems can struggle to recognise the faces of women with darker skin. Furthermore, some interviewees might be rewarded for irrelevant or unfair factors, like exaggerated facial expressions, and penalised for visible disabilities or speech impediments. Moreover, using these kinds of biometric data might lack a legal basis when the data are used to predict workplace success or to make or inform hiring decisions.

During the final process of the hiring funnel, the selection stage, employers make final hiring and compensation determinations. At this last stage, women “consistently submit lower wage bids than men do.”  There are hiring tools that currently aim to predict whether candidates might violate workplace policies, or estimate what mix of salary and other benefits to offer. Rieke and his team worry that such tools might amplify pay gaps for women and workers of colour. As he stated, “human resource data commonly include ample proxies for a worker’s socioeconomic and racial status, which could be reflected in salary requirement predictions. In any case, offering employers highly specific insight into a candidate’s salary requirements increases information asymmetry between employers and candidates at a critical moment of negotiation.”

Rieke insists that all AI tools should be audited: quantitatively by using labelled demographic data to check outcomes and qualitatively by interrogating actual variables and job relatedness.
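A quantitative audit of the kind described here can start with something as simple as comparing selection rates across demographic groups, for instance against the "four-fifths" rule of thumb used in US employment practice. The sketch below is a minimal, hypothetical illustration with invented numbers, not a description of Rieke's own methodology.

```python
# Minimal audit sketch: selection rates by demographic group, checked against the
# "four-fifths" rule of thumb. Numbers are hypothetical.
outcomes = {                     # group -> (applicants, selected)
    "group_a": (200, 60),
    "group_b": (180, 27),
}

rates = {g: sel / apps for g, (apps, sel) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```

As Rieke notes, such statistical checks are only one half of an audit; the qualitative interrogation of the actual variables and their job relatedness is just as important.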

Rieke's conclusion for policy makers is: do not fixate on AI, since personality tests and commonplace applicant tracking system features need scrutiny too. Rieke also advises against focusing exclusively on statistical auditing, which works to the detriment of other forms of bias examination. He insists that outdated federal guidelines should be removed. Furthermore, he encourages policy makers to require employers to show their anti-discrimination work, and to publish new standards and guidance that help scrutinise sourcing and recruiting practices.

It is therefore very important to work on anti-discrimination in the real world in order to avoid discrimination being replicated and "concreted" in the digital world, because once algorithms are online and in use, it is very difficult to stop their usage.



*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.


