Artificial Intelligence and the Sustainable Development Goals: Resources and References

By Kim Ochs*



Disclaimer: The inclusion of apps and services featured in this article does not reflect any product endorsement. Links to commercial websites are not affiliate links.

Artificial Intelligence (AI) is positioned to play an important role in the race to achieve the Sustainable Development Goals (SDGs) by 2030. Satellite images, mobile phones, and big data have enabled the development and implementation of AI across developing countries in diverse applications and settings.

In the area of agriculture, for example, Aerobotics is helping African farmers use drones and satellite images to optimize tree and crop yields. In the area of conservation, WWF installed long-range cameras with AI in Malawi to detect poachers. AI is also being used to increase access to financial services: Kudi, a Nigerian company, offers its Kudi.ai chatbot for bill payments on messaging apps such as Skype and Telegram. Applications in global health include facial recognition technology used to identify malnutrition in children, such as Kimetrica’s Methods for Extremely Rapid Observation of Nutritional Status (MERON) technology and the Child Growth Monitor from Welthungerhilfe, as well as the use of teleradiology and AI to prioritise Covid cases in Africa.

As AI continues to be deployed in developing countries and emerging economies, at local, national, and regional levels, here are a few recommended resources to understand and follow the development of AI, and its impact in relation to the SDGs:

The nine Principles for Digital Development are a set of widely accepted best practices in the Information and Communications Technologies for Development (ICT4D) field: design with the user; understand the existing ecosystem; design for scale; build for sustainability; be data driven; use open standards, open data, open source, and open innovation; reuse and improve; address privacy and security; and be collaborative. (In April 2021, the Digital Impact Alliance, steward of the digital principles, will host a webinar on applying the principles to AI regulations and responsibilities.) The digital principles were created in consultation with major donors, development agencies, and multinational organisations to create a community of practice for those working globally in digital development, which includes AI solutions.

As described in one of the nine guiding Principles for Digital Development, ‘well-designed initiatives and digital tools consider the particular structures and needs that exist in each country, region, and community’. Understanding the context and its complexities is important in the development and implementation of any AI solution. This report by the Inter-American Development Bank, Artificial intelligence and social good in Latin America and the Caribbean, provides a good overview of important contextual issues for AI and development in the region. Proceedings from the 2019 Regional Forum on AI in Latin America and the Caribbean, convened by UNESCO, also provide good background information.

Relevant to the African continent is the 2019 Sharm El Sheikh declaration, adopted by African Union member states. In addition to promoting and implementing the Digital Transformation Strategy for Africa (2020–2030), it calls for the establishment of both a working group on AI based on existing initiatives and an AI think tank to “assess and recommend projects to collaborate on” in line with Agenda 2063: The Africa We Want and the SDGs.

At the global level, AI for Good is the leading United Nations platform on AI, organised by the International Telecommunication Union (ITU) and XPrize. It publishes reports and hosts webinars for business, government, and civil society aimed at fostering international cooperation. It also provides a platform for AI entrepreneurs to pitch an idea or start-up to the AI for Good Innovation Factory. Related initiatives include various focus groups, such as AI4Health, Machine Learning for Future Networks and 5G, AI for Environmental Efficiency, and AI for Natural Disaster Management, which produce white papers and case studies. AI is also an area of interest for the United Nations Development Programme (UNDP) Accelerator Labs, a learning network on sustainable development challenges, which reports that 29% of its lab team members can perform tasks related to AI and machine learning.

In the area of agriculture, according to the Food and Agriculture Organization of the United Nations (FAO), AI could play a significant role in achieving the goal to feed an estimated global population of nearly 10 billion by 2050. In 2020, the Rome Call for AI Ethics was co-signed by the FAO, IBM, and Microsoft; it summarises key definitions, rights, and principles to guide this work.

UNICEF’s Generation AI initiative, a partnership with the World Economic Forum, UC Berkeley, Article One, Microsoft, and others, was created to set and lead the global agenda on AI and children. Among its outputs is a 2019 Memorandum on Artificial Intelligence and Child Rights.

In an earlier article in this newsletter about AI in education, I mentioned the Beijing Consensus on AI and Education, which is a seminal policy document that outlines recommendations for governments and other stakeholders working on sustainable development issues related to education.

FAIR Forward - Artificial Intelligence for All is an example of a cooperative AI initiative fostered by a national development organisation. The German development agency GIZ partnered with five countries—Ghana, Rwanda, South Africa, Uganda, and India—to pursue three goals: “Strengthen local technical know-how on AI; Remove entry barriers to AI; and Develop policy frameworks ready for AI.” Activities to date have included support for Smart Africa, which is developing a pan-African AI policy blueprint; the Lacuna Fund, which mobilises funding for labelled datasets; and the development of open voice technology in local languages.

Private sector companies, spanning start-ups to multinationals with AI products, are engaged in major efforts working towards the SDGs. The AI for Sustainable Development Goals (AI4SDGs) Think Tank provides a useful search tool to browse by goal and identify initiatives by companies, start-ups, and partnerships that address specific goals. Examples include the AIY Vision Kit from Google (addressing SDGs 4, 8, and 9), the City Brain project, created by Alibaba’s DAMO Academy (addressing SDGs 9 and 11), and Fujitsu’s project to use deep learning to estimate the degree of internal damage to bridge infrastructure (addressing SDGs 9 and 11).

2030Vision, a partnership of businesses, NGOs, and academia hosted by the World Economic Forum, published the overview report AI & The Sustainable Development Goals: The State of Play, which provides helpful definitions and highlights examples of AI projects and initiatives related to all of the SDGs. Among the partners is Microsoft, whose AI for Good initiative publishes calls for grants to support projects and ideas to solve global challenges in humanitarian action and other SDG-related areas.

Alliance4AI is a consortium of start-ups, researchers, and organisations working on or with AI in Africa. Many of the 100 African AI start-ups they profiled focus on sustainable development and related challenges in the areas of agriculture, healthcare, and accessible financial services.

AI is projected to play a significant role in development. According to consulting firm PwC, the use of AI for environmental applications could reduce global greenhouse gas emissions by around 1.5–4.0% by 2030. Innovative partnerships and alliances will be important to realise the potential of AI, not only to address climate change but also to achieve the SDGs.

*Kim Ochs has been active in the field of educational technology for more than a decade, spanning work in higher education, research, and start-ups, working with international organisations, NGOs, private companies, and edtech investors. Kim holds a doctorate in educational studies from the University of Oxford.



The next edition of the Leading with AI newsletter will be published on Thursday, 15th April. We wish everyone happy holidays.

How AI will continue to change the nature of work

By Claudia Pompa*


Throughout history, technology and innovation have fostered changes in the workplace, especially in the decades since the Industrial Revolution. However, the current technological advances in artificial intelligence (AI), and the concomitant potential for massive disruption across multiple fields, sectors, and geographies, are unprecedented.

There are two major schools of thought on how AI might affect work in the future. On the one hand, there are those who think AI and new technologies will create new jobs as humans adapt; on the other, those who think AI will destroy jobs as we prove unable to cope with the speed of change. Indeed, fast technological innovation has been responsible for both job creation and job destruction. The World Economic Forum predicts that by 2025, 85 million jobs might be displaced by the change in the division of labour between humans and machines. At the same time, 97 million new roles better adapted to the new division of labour between humans, machines, and algorithms are also likely to emerge.

As AI continues its expansion, there are large questions still looming as to exactly which jobs will be destroyed and which will be created. While highly skilled workers able to master a fast-changing technological environment will find themselves in high demand, low-skill workers face increasing competition and decreasing wages. Workers must learn to cope with a world in which AI will fundamentally change what they do and how they do it, as innovation continues to change the nature of most jobs.

However, it is important to remember that technological innovation does not exist in a vacuum—people are at the heart of development and implementation—and the relationship between humans and technology requires complex thinking from employers and businesses alike. Various policy and academic debates are divided on the real impacts of AI—is it a real “race against the machines” as jobs are lost and income inequality rises? Could larger slowdowns in productivity growth in developed economies mean new innovations have little impact on growth? Or could algorithms augment and improve human performance in the workplace?

These differences arise from a larger debate as to what technological change actually means—are the jobs themselves destroyed or simply altered? Building the algorithms still requires labour, as does upkeep and maintenance of the technology, and new markets for these advances continue to open. While routine tasks and non-person-to-person communication in sectors like customer service are likely to be automated, those jobs that require human interaction, creativity, adaptability, discretion and social skills will be the most difficult (if not impossible) to automate.

In fact, while automation may seem inevitable to some, the ability to automate does not necessarily translate directly into practice. Employers and businesses must consider the costs of developing and deploying the technology, supply and demand from providers and customers, and the actual labour costs themselves—abundant, cheap labour still defines a large portion of the world, and compared to automation, manual labour may remain the cheaper alternative in certain sectors.

As technology augments workers’ ability to complete complex tasks, those companies and sectors with highly skilled workers will be best positioned to take advantage of these new and/or improved employment sectors. Employers and companies seeking advantage within a changing system are those that can stimulate innovation, manage and mobilise resources towards new ventures, and adapt to the new norms of competition.

*Claudia Pompa is the founder and managing director of Consulting for Growth. She specializes in issues related to the future of work and workforce development programs and has extensive experience in digital economies, innovation, economic growth, start-ups and SMEs.



Free Event of the AI Leadership Academy
31st March 2021 @ 18.00 CET (Brussels)

AI can optimize every aspect of life. But should we?


Biography: Alec Balasescu, Ph.D., is an anthropologist by training or, as some would say, a philosopher with data. He approaches the world and his work through the lens of this science. Alec finished his Ph.D. at UC Irvine in 2004 and has been active in both public and private domains in various capacities while continuing to teach in different university settings, both online and in class. He lives in Frankfurt, Germany, and teaches at Royal Roads University in Victoria, BC, Canada. To learn more about Alec Balasescu, please visit alecbalasescu.com.

Register via Eventbrite for the event.

Subscribe to the newsletter to learn about future events.

Offloading to AI: Artificial Intelligence in the World of Work

By Kim Ochs*


Disclaimer: The inclusion of apps and services featured in this article does not reflect any product endorsement. Links to commercial websites are not affiliate links.

Artificial intelligence (AI) tools, bots, and apps are already changing the way humans work in areas such as journalism, marketing, scheduling, and project management. They are being used to optimise work, streamline operations, and even contribute to creative processes. Today’s applications lead to this frequently asked question: Is the role of AI to replace humans?

In May 2020, Microsoft announced that it would be laying off journalists and copywriters to replace them with AI at Microsoft News and MSN.com, affecting editors who had been part of the Search, Ads, News, Edge (SANE) division. As a Microsoft spokesperson said in a related Guardian article, “Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic.” This larger optimization strategy echoes the language of offshoring and outsourcing. In 2021, the new trend is offloading to AI.

Headlime is a copywriting bot that can be used to help come up with ideas for an article. You describe your product or service in 10 words or less, and the AI “trained on over 175 billion parameters” takes your idea and drafts something. It writes the copy for you. The technology behind this is called GPT-3, developed by OpenAI, an AI lab co-founded by Elon Musk.
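For readers curious what sits behind such bots, here is a minimal sketch of calling GPT-3 through OpenAI’s Python client as it worked around this time; the prompt wording and parameter values are illustrative assumptions, not Headlime’s actual implementation.

```python
# A minimal sketch of GPT-3 copy generation via OpenAI's completion API
# (as the `openai` Python package worked circa 2021). The prompt and the
# parameter values are illustrative, not Headlime's actual implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_copy(product_description: str) -> str:
    """Turn a ten-word product description into a first draft of copy."""
    prompt = (
        "Write a short, upbeat piece of marketing copy for this product:\n\n"
        f"{product_description}\n\nCopy:"
    )
    response = openai.Completion.create(
        engine="davinci",   # GPT-3 base engine name at the time
        prompt=prompt,
        max_tokens=80,      # keep the draft short
        temperature=0.7,    # some creativity without incoherence
    )
    return response.choices[0].text.strip()

print(draft_copy("A solar-powered reading lamp for off-grid households"))
```

The point of the sketch is how little scaffolding is needed: the human contribution is the ten-word brief, and everything else is offloaded to the model.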

WriteSonic, another copywriting bot and marketing tool, can convert one-liners into ads, descriptions, or ideas in 10 different languages, including English, Spanish, Chinese, Russian, German, and Japanese. An example featured on the website shows how you can enter the product name, a brief description, and any promotion or occasion (such as a holiday discount), and it will generate multiple ads (e.g., a Facebook ad complete with an image) to reflect the tone and subject of the ad. You can edit the ad or ask for more examples to be instantly generated.

Such tools are very good at applying accumulated knowledge to the task of writing at incredible speed, almost instantaneously, but they cannot conduct ethical checks. Nor can they pass judgements about what would or would not work well based on nuance or years of experience with an audience. The current discussion about misinformation, fake news, and disinformation in journalism, some of which could actually be the work of AI bots, is raising important questions about reliability, quality, and automation. AI is taking jobs, but it cannot completely replace humans. At least not yet.

Not only do companies offload to AI, so too do individuals. Professionals and entrepreneurs use AI to conserve and optimise their most important resource: time. They are asking: What would I rather spend my time doing? What is the important work I want my staff to do? What could the machines do better? Cue the new tasks for personal AI assistants on the mobile phone and at home such as Siri, Alexa, and Cortana, as well as new digital helpers at the office.  

As filmmaker Tiffany Schlain asked in a Cool Tools podcast episode, why have a personal assistant spend their time scheduling meetings when you can have Clara, the AI scheduling bot? A product of Clara Labs, Clara can be copied in an email conversation to figure out the best time for people to speak across different time zones. As Schlain reported, “I can’t tell you how many times people are like, ‘You have the best assistant, Clara.’ And then I feel like I’m breaking their heart when I’m like, she’s a bot.” It is notable that Clara has her own human assistant back at the lab. Yet again, there is a “human in the loop”.

In addition to virtual assistants and scheduling tools, there are AI tools for social media and optimization (e.g., Lately, HubSpot, LinkFluence, Cortex) and for project management analytics and reporting (e.g., Aptage, ClickUp, Workstreams). The machines are simply better at analysing huge volumes of data very quickly. But humans, empowered with these data and analytics, are essential to critically interpret and communicate the results in meaningful ways across different audiences and cultures.

The question, “Will AI replace humans?” is similar to the question we have been asking for decades: “When will we go paperless?” Neither one leads to a constructive discussion. I have yet to meet a digital native who took down the paper-based photo of their great-grandparent, scanned it, uploaded it to their phone, and discarded the original.

The far more important questions we should be asking are the underlying ones about the human experience. Which human behaviours and actions do we need to understand better before we offload them to AI and machine learning systems? How and when must humans remain in the loop to ensure ethical uses of AI and humane technologies, guarding against algorithmic biases and protecting workers’ rights? What is essential to change about the world of work, particularly in the context of threats to humanity such as climate change? 

The new world of work is unknown. As stated in an Institute for the Future report, many of the jobs people will have in 2030 do not yet exist. This could even include “new, uniquely human” jobs. Over the next decade, we need to collectively debate and actively decide how machines and humans can collaborate rather than compete, ensuring that AI supplements rather than supplants us.

*Kim Ochs has been active in the field of educational technology for more than a decade, spanning work in higher education, research, and start-ups, working with international organisations, NGOs, private companies, and edtech investors. Kim holds a doctorate in educational studies from the University of Oxford.

Algorithmic discrimination - Equality in the digital era: AI and anti-discrimination law in Europe

By Katharina Miller*

In February, the Leading with AI team organised its first event, on “Equality in the digital era: AI and anti-discrimination law in Europe”. Our speaker was Raphaële Xenidis, a lecturer in EU law at the University of Edinburgh School of Law and a Marie Curie Fellow at iCourts, University of Copenhagen. The event focused on algorithmic discrimination, with a particular emphasis on the European approach to regulating it.

This article gives an overview of topics addressed during the event.

An algorithm is a set of computer instructions used for problem-solving purposes which produces an output value based on input data. It can be based on rules (if condition A is met, outcome B should follow) or on machine learning, i.e. the ability to autonomously adapt, evolve, and improve so as to optimise a given outcome from input data without being explicitly programmed to do so. Bias gets into the equation or algorithm because the human being who writes the computer instructions brings in her or his own bias, and this can happen consciously or unconsciously.
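As a toy illustration of those two ingredients (explicit rules versus learned behaviour), consider the following Python sketch; the loan-screening scenario and every number in it are invented for illustration only.

```python
# Toy contrast between the two kinds of algorithm described above.
# The loan-screening scenario and all numbers are invented for illustration.
from sklearn.linear_model import LogisticRegression

# 1) Rule-based: "if condition A is met, outcome B should follow".
#    The designer's choice of threshold is one place human bias can enter.
def rule_based_decision(income_k: float) -> bool:
    return income_k >= 30  # income in thousands; the cut-off is a human judgement

# 2) Machine learning: the mapping is induced from past examples, so any
#    bias present in the historical decisions is learned and reproduced.
X = [[20], [25], [40], [60]]   # applicant incomes (thousands)
y = [0, 0, 1, 1]               # past (possibly biased) approval decisions
model = LogisticRegression().fit(X, y)

print(rule_based_decision(35))    # True: the rule fires
print(model.predict([[35]])[0])   # outcome shaped entirely by the data
```

In the first case the bias sits in a threshold a person chose; in the second it sits in the data a person collected.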

In this context, algorithmic bias is a systematic error in the outcome of algorithmic operations. The overall ethical standard that has been agreed upon in order to avoid algorithmic bias is fairness: a set of procedures aiming to avoid bias so as to ensure outcomes that respect ethical standards such as acknowledgement of human agency, privacy and data governance, individual, social, and environmental wellbeing, transparency and accountability, and oversight.
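One simple way to make “systematic error” concrete is to check whether favourable outcomes are distributed evenly across groups. The sketch below computes a demographic-parity gap, one common fairness measure among several; the decisions and group labels are invented for illustration.

```python
# Minimal sketch of one fairness check: demographic parity, i.e. whether
# the rate of favourable outcomes differs between groups. Data invented.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = favourable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

def favourable_rate(group: str) -> float:
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

gap = favourable_rate("a") - favourable_rate("b")
print(f"group a: {favourable_rate('a'):.2f}")  # 0.75
print(f"group b: {favourable_rate('b'):.2f}")  # 0.25
print(f"parity gap: {gap:.2f}")                # 0.50 -> a systematic skew
```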

There are many examples of algorithmic discrimination. For example, facial recognition applications perform much worse at recognising black women’s faces than white men’s. Similarly, a Google search for “professional” hair shows mostly pictures of white women, while a search for “unprofessional” hair displays predominantly pictures of black women.

Another example is Microsoft’s AI chatbot @TayandYou, which Microsoft described as an experiment in “conversational understanding.” Microsoft launched the chatbot on Twitter, and it took less than 24 hours for Twitter to corrupt it. James Vincent, a senior reporter for The Verge, described the discrimination, noting that shortly after the chatbot’s launch people started tweeting the bot with all sorts of misogynistic and racist remarks. As a consequence, and because the chatbot was essentially a robot parrot with an internet connection, “it started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.”

The origins and causes of algorithmic discrimination reflect existing discrimination in our offline, real world, where humans discriminate against each other. It starts with our own human stereotypes that have led to discrimination in the past (such as the notion that men are strong and women are weak, or assumptions based on racial stereotypes). The consequences are structural inequalities. Stereotypes and biased conduct enter—consciously or unconsciously—into the design of an algorithm. This leads to the generation of biased data, as in the cases described above.

If societies want to avoid repeating the same patterns of bias and discrimination that we witness in the ‘physical world’, algorithmic discrimination needs to be addressed. Otherwise, we risk creating a digital world that replicates structural inequalities.

EU anti-discrimination law, however, shows several shortcomings when applied to the digital sphere. First, there are scope-related shortcomings. For example, there are gaps related to online discrimination of consumers beyond gender and race. While there is protection against algorithmic discrimination in the media, advertising, and education, there is so far no protection against the digital gender pay gap among platform workers who provide a service in return for money (e.g. individuals who use an app, such as Uber, or a website, such as Amazon Mechanical Turk, to match themselves with customers).

Miriam Kullmann, a researcher at WU Vienna University of Economics and Business and the Harvard University Weatherhead Center for International Affairs, described the algorithmic discrimination of platform workers in a 2018 article: some female platform workers receive lower pay than their male counterparts, and some online platforms use algorithms to determine pay levels. The key question to be addressed here is:

the extent to which current EU gender equality law, and the principle of equal pay for women and men in particular, is adequate for protecting platform workers in a situation where work-related decisions are not taken by a human being but by an algorithm that is the potential source of discrimination.

Furthermore, there are some conceptual and doctrinal frictions in EU legislation, such as intersectionality. According to the Oxford English Dictionary, intersectionality is “the interconnected nature of social categorizations such as race, class, and gender, regarded as creating overlapping and interdependent systems of discrimination or disadvantage.” The protected grounds of discrimination reflect a “single-axis” model and do not protect against the granularity of profiling or subcategorisation, which can also render many people invisible. In other words, the aforementioned example of algorithmic discrimination in facial recognition ignores intersectionality.

There are also procedural difficulties in establishing proof of discrimination, given the lack of transparency of ‘black box’ algorithms, the lack of explainability obligations, and the opacity of proprietary algorithms. The AI black box problem refers to the inability to fully understand why the algorithms behind the AI work the way they do. Further procedural difficulties include allocating responsibility within a fragmented chain of actors and complex human-machine interactions, and attributing liability across multiple legal regimes in complex and composite AI systems.

In her conclusion, Raphaële Xenidis spoke about solutions to algorithmic discrimination and opportunities for improving equality through technology. She explained the Prevent, Redress, Open, Train, Explain, Control, Test (“PROTECT”) approach and discussed how technology opens new possibilities, such as the detection of discrimination in algorithmic and human decision-making (e.g., through the auditing of algorithms). Auditing is a process of analysing and processing data, understanding how algorithm developers make decisions, and tracing where all the data actually comes from.
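To give a flavour of what such an audit can involve, here is a hedged sketch of one basic technique, a counterfactual test that flips a protected attribute and checks whether the decision changes; the stand-in model and records are invented, not any audited production system.

```python
# Hedged sketch of one basic audit technique: a counterfactual test that
# flips a protected attribute and checks whether the decision changes.
# The stand-in model and the records are invented for illustration.

def count_flips(predict, records, attribute="gender"):
    """Count records whose decision changes when only `attribute` is flipped."""
    flips = 0
    for record in records:
        counterfactual = dict(record)
        counterfactual[attribute] = (
            "female" if record[attribute] == "male" else "male"
        )
        if predict(record) != predict(counterfactual):
            flips += 1
    return flips

# A deliberately biased stand-in model: income matters, but so does gender.
biased_model = lambda r: r["income"] > 30 and r["gender"] == "male"
records = [
    {"income": 40, "gender": "male"},
    {"income": 40, "gender": "female"},
]
print(count_flips(biased_model, records))  # 2 -> every decision is gender-dependent
```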

There might also be a potential increase in the replicability and accuracy of decision-making, and debiasing strategies could be implemented in machine-based decision-making, e.g. bias minimisation and mitigation techniques. There are some EU-funded projects that promote “Ethics by Design” when creating algorithms, such as SHERPA or SIENNA. However, this discussion has only just started, and there is a long way to go before PROTECT or algorithm audits become mainstream. Within “Leading with AI” we shall accompany these discussions and continue writing about this topic.


*Katharina Miller is a change agent with legal tools for ethics and integrity in innovation and technology. She is also a European Commission Reviewer and Ethics Expert. She is co-editor of the book "The Fourth Industrial Revolution and its Impact on Ethics - Solving the Challenges of the Agenda 2030" and co-lead of the working group “Digital Equality” of the Berkeley Center on Comparative Equality and Anti-Discrimination Law of the Berkeley Law School.

Overview of AI in healthcare in Indonesia

By Jum’atil Fajar*

The Ministry of Research and Technology/National Research and Innovation Agency launched the National Strategy for Artificial Intelligence (NSAI) on 10 August 2020. One of the priorities in this strategy is healthcare.

Although the NSAI was launched only recently, research on artificial intelligence (AI) in the Indonesian health sector has been carried out since 2000, when researchers used an expert system to help diagnose tuberculosis. Expert systems are computer systems that mimic the decision-making abilities of a human expert.
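To illustrate the idea, here is a minimal sketch of a rule-based expert system of the kind described; the rules and symptoms are invented for illustration and are neither the actual Indonesian system nor medical advice.

```python
# Minimal sketch of a rule-based expert system: hand-written rules that
# mimic an expert's reasoning. The rules are invented for illustration and
# are neither the actual Indonesian TB system nor medical advice.

RULES = [
    # (required findings, suggested follow-up)
    ({"persistent_cough", "night_sweats", "weight_loss"},
     "High suspicion of TB: refer for sputum test and chest X-ray"),
    ({"persistent_cough", "fever"},
     "Possible respiratory infection: recommend clinical examination"),
]

def suggest(findings):
    """Fire the first rule whose conditions are all present."""
    for required, advice in RULES:
        if required <= findings:  # all required findings observed
            return advice
    return "No rule matched: gather more information"

print(suggest({"persistent_cough", "night_sweats", "weight_loss"}))
```

Because the knowledge lives in hand-written rules rather than learned parameters, such a system needs expert input but very little data, which is exactly why, as noted below, it has been popular with students.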

Expert systems continued to be used by students majoring in technology and information in their research until 2019, and are currently being used to help diagnose various infectious and non-communicable diseases such as pertussis, diphtheria, heart disease, and stroke. Timotius Indra Kesuma, Director of Research, Development, and Innovation of the Indonesia Artificial Intelligence Association (IAIS), explained that expert systems do not require a lot of data, which is why they are widely used by students in their final projects. Research has also moved beyond expert systems: one example is the Deep Learning Approach for Classification of Hypertension Retinopathy, the result of doctoral research by Bambang Krismono Triwijoyo, which is currently at the prototype level.

Herdiantri Sufriyana, the lead researcher of “Artificial intelligence-assisted prediction of preeclampsia: Development and external validation of a nationwide health insurance dataset of the BPJS Kesehatan in Indonesia”, explained that research can sometimes be hindered by the lack of access to data. Sufriyana relies on the medical histories of patients recorded across health service facilities (clinics, community health centres, hospitals). At the moment, however, health service facilities are not able to share patient data with one another, creating a series of challenges. To overcome this problem, Sufriyana improved the model so that it could be used by a health service facility without having to access the patient’s disease history held at other facilities. Sufriyana plans to focus on preeclampsia prediction and will continue conducting clinical trials.

The ability of AI to diagnose diseases is also being used in several health applications that can be accessed via smartphones or websites. The Prixa website can suggest a diagnosis based on the symptoms visitors report and recommend what to do next. The Android app PeduliLindungi, developed to assist relevant government agencies in contact tracing to stop the spread of COVID-19, has a chatbot that can be used to assess the disease’s symptoms.

One startup in Indonesia, Widya Imersif Teknologi, has successfully developed a smartwatch with features that can measure body temperature, heart rate, blood oxygen levels, blood pressure, cholesterol and blood sugar levels, and stress levels, along with geofencing, sleep monitoring, activity statistics, and the number of calories burned. Most of these features take advantage of AI.

Laboratory services that utilize AI have been developed by Neurabot. This laboratory offers telemicroscopy: the hospital simply sends a photo file of laboratory samples, and the AI technology helps to identify and count cells to predict cancer.

In radiology services, AI has been used to diagnose lung disorders due to COVID-19. In addition, many hospitals are already using wireless endoscopy capsules. This technology is equipped with a marker indicator that makes it easy for doctors to mark the location of bleeding in the gastrointestinal tract. 

GeNose, a nationally made COVID-19 detection technology that utilizes AI, obtained a distribution permit from the Ministry of Health in December 2020. This technology can predict infection by simply analysing the breath of the person being examined.

Progress in the use of AI in the health sector over the years has been considerable, and the sector continues to flourish with the emergence of innovative startups and research. Yet, to take full advantage of these benefits, more needs to be done to translate the research currently being conducted into health services. The launch of the NSAI is a welcome and awaited move in that direction, as it will help provide the boost needed for this to happen.


*Jum’atil Fajar is an AI enthusiast. He holds a Masters degree in Health Sciences. He helped develop a hospital management information system and currently manages the Hospital Accreditation Data Management Information System.
