Artificial intelligence and sustainability: AI4Good or AI4Bad?

By Hasna Abdelwahab*

How often do we link terms like data science, artificial intelligence (AI), and machine learning with futuristic advances, such as highly sophisticated robots and spaceships as public transport?

Why do we not associate them with a greener planet, cleaner air, or flourishing biodiversity?

Fourth Industrial Revolution technologies such as AI are enabling humanity to harness information and data to revolutionise education, energy, healthcare, agriculture, transportation, and many other service areas. AI helps us make the world a better place, from traffic management in urban mobility to enhancing the efficiency of renewable energy, to predicting crop needs and other innovative solutions in smart agriculture. AI is becoming a key tool for facilitating a circular economy and building smart cities that use their resources efficiently.

Linking AI with the Sustainable Development Goals (SDGs) contributes to designing for a healthier planet, addressing current needs without compromising future generations through climate change or other major challenges. According to a study published in Nature (2020), AI could help achieve 79% of the SDG targets: it may act as an enabler on 134 targets across all SDGs, generally through technological improvements that help overcome present limitations.

However, driving positive change with AI can also have a negative impact on the three pillars of sustainable development (society, economy, and environment). Starting with the environment, the climate impact of AI is visible in machine learning (ML) programmes that demand ever more energy and favour accuracy over efficiency, resulting in large experiments that often run without attention to their digital carbon footprints.

A well-known study by Emma Strubell, Ananya Ganesh, and Andrew McCallum (2019) illustrated that training a single deep learning natural language processing (NLP) model can lead to approximately 300,000 kg of carbon dioxide emissions, roughly the amount produced by five cars over their lifetimes. Land exploitation is also an issue, including the metal extraction and e-waste that result from the need to collect, store, and analyse all that data, which requires significant processing power and ever more energy.
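The scale of such figures can be sanity-checked from first principles: training emissions are roughly hardware power draw multiplied by training time, data-centre overhead, and the carbon intensity of the electricity grid, which is the general approach this kind of accounting follows. The Python sketch below is a minimal, illustrative calculation only; the GPU count, power figures, overhead factor, and grid intensity are assumptions chosen for the example, not values taken from the study.

```python
# Illustrative estimate of the carbon footprint of a model training run.
# All numbers below are assumptions for the example, not figures from
# Strubell et al. (2019); a real audit would use measured power draw and
# the carbon intensity of the actual grid.

def training_emissions_kg(num_gpus: int,
                          gpu_power_watts: float,
                          training_hours: float,
                          pue: float = 1.6,                 # data-centre overhead (assumed)
                          grid_kg_co2_per_kwh: float = 0.45) -> float:
    """Return estimated CO2 emissions in kilograms for one training run."""
    energy_kwh = num_gpus * gpu_power_watts * training_hours / 1000.0
    total_kwh = energy_kwh * pue                             # cooling, networking, etc.
    return total_kwh * grid_kg_co2_per_kwh


if __name__ == "__main__":
    # Hypothetical large NLP experiment: 512 GPUs at 300 W running for two weeks.
    kg = training_emissions_kg(num_gpus=512,
                               gpu_power_watts=300,
                               training_hours=14 * 24)
    print(f"Estimated emissions: {kg:,.0f} kg CO2")
```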

The use of AI across sectors is expected to increase global GDP by up to 14% by 2030. Although AI is seen as an engine for development and economic growth, it might also widen the gap between developed and developing countries, causing further negative economic impacts. Unemployment is a great concern as robots replace people and new production processes that no longer need humans change the labour market, redefine jobs, and lead to greater inequality.

AI has a long way to go when it comes to global regulations and public policies, as there are no international laws that regulate this recent technology. A lack of governance over AI is one of the reasons that AI is biased. Who builds the AI, and who develops intentionally biased algorithms? Seventy-five percent of all new digital innovation and patents are produced by just 200 firms from the West. Out of the 15 biggest digital platforms people use, 11 are from the United States and the rest are Chinese.

The lack of gender, racial, and ethnic diversity in the AI workforce is one of the reasons AI is biased. The injustice of inequality and the lack of inclusion in datasets are further problems.

Sustainable AI requires more effort when it comes to the triple bottom line of people, planet, and profit. Harnessing technological advancement for the greater good has to be carefully addressed and wisely progressed, so that the Fourth Industrial Revolution will not be the last.



*Hasna Abdelwahab is a young sustainability and social impact professional who has led private sector entities to impactful community development projects and introduced the concept of corporate sustainability in a few operations. She has worked at DAL Group, MTN, Haggar Group, and Morouj Commodities. She has implemented hundreds of social responsibility projects and worked to remodel the concept of CSR from charitable giving to sustainable impact. She is also a climate change activist, taking part in various international platforms such as conferences and panels. Hasna holds a master's degree in renewable energy.

Hasna is an active member of the Sudan Environmental Forum and the Global Shapers Community. She has also served as a coordinator in the IEEE, an organiser in TEDx, and a coordinator in the SDG Hub.

As a communication enthusiast, she writes for newspapers and magazines on issues of interest.

Is artificial intelligence justly distributed?

A look at the European Union AI Act

By Marijana Šarolić Robić*

[Source: Orgalim]


We are all aware of the daily data overflow and the continuous, almost imperceptible, technology-driven changes in the way we live and work. It is becoming extremely difficult and complex to monitor, audit, and implement these changes in our everyday lives, both personal and professional.

This increased complexity has had a significant impact on the field of artificial intelligence (AI). The Hype Cycle for Emerging Technologies 2021, published by Gartner, points to continued growth in the AI field. As the authors put it,

For example, generative AI is an emerging technology that the pharmaceutical industry is using to help reduce costs and time in drug discovery. Gartner predicts that by 2025, more than 30% of new drugs and materials will be systematically discovered using generative AI techniques. Generative AI will not only augment and accelerate design in many fields; it also has the potential to “invent” novel designs that humans may have otherwise missed.

We already use AI daily in different tools and apps, such as social media, e-commerce, digital payment solutions, delivery services, and gaming. Aware of this momentum, European Union (EU) regulators have been addressing AI use and related challenges since April 2018 and are currently discussing the proposed AI Act, which introduces a risk-based approach and foresees the establishment of a central EU register of high-risk AI systems, with permanent compliance requirements throughout those systems' life cycles. However, if you ask an average EU citizen about the AI Act, they are unlikely to know about it.
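For readers unfamiliar with the proposal, the risk-based approach sorts AI systems into tiers, from prohibited practices through high-risk systems (which would have to be registered and kept compliant across their life cycle) down to limited- and minimal-risk uses. The sketch below shows, purely as an illustration, how a provider might triage its own use cases against those tiers; the tier names follow the proposal's structure, but the keyword-based mapping and the example use cases are simplified assumptions, not legal guidance.

```python
# Hypothetical internal triage of AI use cases against the proposed EU AI Act's
# risk tiers. The four tiers reflect the proposal's structure; the mapping
# logic and example areas are simplified assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: register, document, monitor over the life cycle"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no new obligations"

# Example keyword-based mapping (assumed for illustration only).
HIGH_RISK_AREAS = {"recruitment", "credit scoring", "medical device", "border control"}
LIMITED_RISK_AREAS = {"chatbot", "deepfake", "emotion recognition"}

def classify(use_case: str) -> RiskTier:
    text = use_case.lower()
    if "social scoring" in text:
        return RiskTier.UNACCEPTABLE
    if any(area in text for area in HIGH_RISK_AREAS):
        return RiskTier.HIGH
    if any(area in text for area in LIMITED_RISK_AREAS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

if __name__ == "__main__":
    for case in ["CV screening for recruitment", "customer-support chatbot", "spam filter"]:
        print(f"{case!r:35} -> {classify(case).value}")
```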

Related issues of accessibility, transparency, and the skill sets EU citizens need to use such technologies also deserve mention. Without the involvement of all stakeholders (policy makers, citizens, NGOs, academia, businesses, etc.) working permanently on awareness and education campaigns, the legislation will bring hardly any change for regular citizens.

In the existing draft of the AI Act, the burden of implementation seems unfairly laid on industry's shoulders, i.e., on the small and medium enterprises (SMEs), generally start-ups, bringing their AI-based products and services to markets and consumers. Under the proposed draft regulation, SMEs would have to shoulder the burden of AI compliance while competing with global market players and companies originating from far less regulated markets. One can only hope that SMEs would succeed in such compliance procedures, but one cannot expect this without the active involvement of the above-mentioned stakeholders in public campaigns.

Moreover, companies are not the sole stakeholders in the AI market, let alone sole beneficiaries. Placing responsibility solely on them might result in such companies leaving the EU market area and offering their AI-based products and services to other markets. EU citizens might find themselves in a position where AI-based products and services might not be available to them in EU markets due to overregulation and the inability of SMEs to comply with EU regulatory frameworks. This could lead to a loss of value creation for all stakeholders.

For example, if a start-up in the agricultural industry (e.g., providing an AI-driven early warning system for weed detection and elimination) is unable to satisfy the required regulatory framework, it could simply leave the EU and provide its services to farmers elsewhere. Similarly, a company providing wellbeing and lifestyle AI software, which helps prevent or even eliminate conditions such as high blood pressure or high cholesterol, or provides recommendations on nutrition, exercise, lifestyle, and regular monitoring of the user's health status, could leave the EU market.

Educational campaigns like the ones initiated during the Finnish EU presidency in 2019, providing free online educational tools on the basics of AI, could make a real difference in tackling accessibility and awareness issues. Such campaigns might help support a more just distribution of AI benefits to average citizens by empowering them to identify those benefits in their private and professional lives (e.g., easier, cheaper, and more accessible transfer of and access to financing through various fintech solutions, better control and monitoring of different health conditions, etc.). However, an active public campaign and the promotion of lifelong learning are preconditions.

The world of today is becoming more inaccessible and more difficult to understand for people who do not keep up with technological changes. If we continue on this path, the result could be the creation of different citizen classes: the technology literate and the technology illiterate, technology consumers and technology makers. As Christine Lagarde explained in her February 2021 podcast episode for The Economist, we must jointly (re)define the values and principles we hold to be pillars of our civilisation and ensure that all stakeholders take part in, and have access to, such a process within democratic frameworks.

Only open dialogue and a fair distribution of benefits and burdens will result in a fairer and more just distribution of AI. The initial step must be taken by each of us individually, by constantly learning and updating our knowledge in the field of technology, AI included. We must strive to become not only technology users but also technology makers, and ensure a focus on the just and fair distribution of AI technology benefits for all stakeholders. It is important to understand and remember that a technical background is not a precondition, as AI is already embedded in many aspects of our lives. What we need are experts from all walks of life to jointly work on technology creation, development, implementation, and usage. There is room for everyone in the AI world. Do come and join us!



*Marijana Šarolić Robić is an AI enthusiast who has worked as a lawyer for almost 20 years. Since 2013, Marijana has supported the local start-up community as a mentor and has been an active stakeholder in the creation of the Croatian Artificial Intelligence Association (CROAI). Her field of work is the technology-driven economy, with a specialty in shareholder relations, incorporation, and regulatory frameworks. She finished her EMBA in 2015 and, since 2016, has been one of the founders and President of PWMN Croatia/PWN Zagreb, an NGO that promotes gender equity in the business environment.


Today’s Event Postponed

Due to unforeseen circumstances, today’s scheduled event with the author of this article, Marijana Šarolić Robić, has been postponed until 2022.

Further details will follow.

The role of partnerships in Artificial Intelligence (AI) for education and training

By Sylvia Mukasa*

This article centres on the roles of different players in supporting the use of artificial intelligence (AI) in education and training: What partnerships can be formed to improve the availability and use of AI for education and training? What are examples of multi-sectoral partnerships, and what are their goals in enhancing or leveraging AI in the sector? Which benefits derive from partnerships? And finally, what are the recommendations for meaningful partnerships?

The term “skills gap” describes a fundamental mismatch between the skills that employers rely upon in their employees, and the skills that job seekers possess. This mismatch makes it difficult for individuals to find jobs and for employers to find appropriately trained workers.

In the Harvard Business School series “Managing the Future of Work,” Joseph Fuller argues that business leaders must champion an employer-led skills development system to source and create robust talent pipelines, which requires cross-sector collaboration between government, education, and business.

In July 2021, at the High-level Policy Forum organised by the Association for the Development of Education in Africa alongside development partners, ministers of African countries spoke of the challenges their Technical and Vocational Education and Training (TVET) systems experience, including mismatches between graduates’ skills and the labour market needs. From the key takeaways, it emerged that educational technology (EdTech) can offer new ways to customise the learning experience, allowing developing countries to integrate world-class training content and to enhance skills recognition and transferability across markets. Another takeaway was the need for partnerships, especially private sector engagement, in helping individuals acquire relevant skills.

Partnerships to improve the availability and use of AI for education and training

According to eLearning Industry, 47% of learning management tools will be enabled with AI capabilities within the next three years. Machine learning (ML) and AI are key drivers of growth and innovation across all industries, including the education sector. AI is bringing, and will continue to bring, new opportunities for enhanced learning and new forms of learning, and will offer more flexible lifelong learning pathways. The COVID-19 pandemic also accelerated some of these changes, forcing educators to rely on technology for virtual learning; 86% of educators now say technology should be a core part of education. These emerging trends call for the education sector to partner with EdTech providers to address learning environment, policy, and societal challenges.

Other partnerships are aimed at, but not limited to:

a) Developing new digital skill learning programmes to scale up AI skills.

b) Accelerating progress towards Sustainable Development Goal (SDG) 4 (Quality Education) and other SDGs through partnerships in AI education.

c) Helping institutions and governments launch broad national or regional education, empowerment, and innovation initiatives to: educate teachers and students; launch digital learning platforms; initiate empowerment programmes for young entrepreneurs and start-up teams; promote local innovation initiatives, talent, and start-up competitions; and inform policy formulation.

Partnerships to enhance or leverage the use of AI in education and training and their goals

Rapid deployment of advanced technologies such as cloud computing, automation, and AI means that to enter the current workforce, a new set of skills is required. With this in mind, Ericsson and UNESCO have combined their respective strengths to educate and create opportunities to scale up AI skill development for young people and to meet the global SDGs.

Their AI for youth initiative seeks to:

i. Develop and manage a repository of AI and other key digital skill training courses that will be available globally.

ii. Build the capacities of master trainers from selected countries around the globe with advanced knowledge of AI skill development.

iii. Support master trainers to mobilise AI hub centres and hackathons to train young people in developing AI applications.

Mobile Learning Week (MLW) is another UNESCO collaborative initiative that focuses on the evolving dynamics between AI and education.

However, it is important to recognise that the examples above are not exhaustive. There are other partnerships between government, industry, academia, and others to enhance or leverage the use of AI in education and training.

Benefits of partnerships to policy makers and other stakeholders

Rapid technological advancements inevitably bring multiple risks and challenges, which often outpace policy debates and regulatory frameworks. UNESCO, through its AI readiness self-assessment framework, is committed to supporting member states in harnessing the potential of AI technologies to achieve the Education 2030 Agenda at the national level, while ensuring that the application of AI in educational contexts is guided by the core principles of inclusion and equity. The UNESCO guide for policymakers on AI and education helps policymakers understand AI and respond to the challenges and opportunities that AI presents in education. It introduces the essentials of AI, such as its definition, techniques, technologies, capacities, and limitations. It also delineates emerging practices and the benefit-risk assessment of leveraging AI to enhance education and learning. This project is implemented by UNESCO in partnership with Microsoft, the Weidong Group, and the TAL Education Group, and is open to a multi-stakeholder partnership approach.

Recommendations for meaningful partnerships

Governments should invest in improving the quality of education for all, to lay the foundation for integrating AI into the economy more seamlessly. It is critical to ensure that existing imbalances in general and technology-specific skills do not continue to act as barriers that prevent disadvantaged populations from joining the AI workforce. Countries should also not expect the private sector to drive their national development agendas.

In developing countries, attempts to establish a trained workforce for successful AI deployment should be approached with care to prevent exacerbating social inequalities. While rapid skills development can produce short-term boosts in AI skills, in the long term the result might be a work-ready but narrowly skilled workforce with primarily entry-level job skills. In contrast, the slower formal education route may produce less work-ready graduates, but with a broader knowledge base, better critical thinking, and readiness for management-level positions.

In conclusion, AI has the power to optimise both learning and teaching, to the benefit of both students and teachers. However, effective EdTech approaches cannot flourish without partnerships between the public and private sectors and without an enabling environment that includes digital infrastructure and the digital skills of learners and teachers. Also, besides technical skills, AI governance, ethics, and cybersecurity ought to be central components of curricula designed from beginner to advanced level. There is a growing need for educators, students, and policymakers to have a basic understanding of AI. This is essential to enable them to engage not only positively but also critically with the ethical implications of AI and data use in education and training.



*Sylvia Mukasa is an award-winning entrepreneur. Sylvia is Founder/CEO of GlobalX Investments Ltd/GlobalX Innovation Labs, which focuses on emerging technologies. She is passionate about empowering Women in Tech and contributing to the entrepreneurial ecosystem in Africa and globally. She is Country Co-Founder/Chapter Lead (Kenya) for Women in Tech Africa (WiTA), which won the United Nations EQUALS in Tech Award in the Leadership Category in 2018. She is a 2014 TechWomen Fellow, an initiative of the U.S. Department of State's Bureau of Educational and Cultural Affairs. TechWomen, launched by former US Secretary of State Hillary Clinton, empowers, connects, and supports the next generation of women leaders in STEM from Africa, Central and South Asia, and the Middle East. Sylvia holds a certificate from the GIZ/ITCILO's Leading with AI Lab. She is a member of the Gender Alliance and the Global Leadership Academy (GLAC) and a BMW Foundation Herbert Quandt Responsible Leader.


Next Event: 25th November @ 15.00 CET

Register here on Eventbrite


Disruption in the fashion industry and its implications for e-commerce and manufacturing

By Emmanuel Ejeu*

(Source: Reverse Engineering Service)


Most people like to look good, which means shopping for clothing and other wearable accessories such as belts, shoes, and rings. Despite US shoppers preferring in-store shopping, Raydiant's 2021 State of Consumer Behavior Report found that 40% of respondents had decreased their visits to stores due to the pandemic. Overall, respondents prefer to shop in physical stores because they like to see and feel products and because they enjoy the overall experience of shopping in person. The in-store experience gives customers the satisfaction and assurance that what they are buying is what they want. Some items also appear more enticing when displayed on mannequins, which influences customers' decisions.

Current online shopping experiences provided by the likes of Amazon, Jumia, and eBay provide convenience and variety, but not customisation. They lack features to boost consumer confidence in making a purchase or to recreate the experience of trying on items.

Emerging technologies, including artificial intelligence (AI), could fill the gaps presented by both online and brick-and-mortar retailers. Technologies that could develop over time and provide a superior shopping experience include:

  1. Wearable mannequins and full body scanners (Personalised experience)

  2. E-commerce as a gateway to an AI-enabled apparel ecosystem (Start of experience)

  3. Fashion and design to provide customised shopping of design and materials (Customised experience)

  4. Decentralised manufacturing, where orders will be made on demand with the use of 3D printing (Execution)

  5. Logistics to complete the eco-system experience (End of experience)

1. Wearable mannequins and full-body scanners

Mannequins, also referred to as “dummies”, are plastic models of the human form. They come in different sizes and are often dressed in shop windows to give an impression of how clothes would look on an actual person. However, they are not available in every size. Imagine how your shopping behaviour could change if you could have your own personal mannequin. Using AI and swimsuit-like sensor garments, clothes are being developed that measure your body shape and height. With these measurements, a digital mannequin can be created and put onto a shopping website, where one could search for wearables that fit it. There are two ownership models for the wearable mannequin: customers could either purchase their own wearable mannequin maker, which could be cheaper, or visit nearby stores that have one and register their body sizes to the store's system dashboard.
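As a rough illustration of how such body measurements could feed a shopping site, the sketch below matches a stored body profile against garment size charts. The measurement fields, size ranges, and the fallback to made-to-order are invented for the example; real systems would work from far richer 3D body models.

```python
# Toy illustration: matching a stored body profile to garment sizes.
# Field names, size chart, and ranges are invented for the example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BodyProfile:
    height_cm: float
    chest_cm: float
    waist_cm: float

# Hypothetical size chart: size -> (chest range, waist range), in cm.
SIZE_CHART = {
    "S": ((86, 94), (71, 79)),
    "M": ((94, 102), (79, 87)),
    "L": ((102, 110), (87, 95)),
}

def recommend_size(profile: BodyProfile) -> Optional[str]:
    """Return the first size whose chest and waist ranges fit the profile."""
    for size, ((c_lo, c_hi), (w_lo, w_hi)) in SIZE_CHART.items():
        if c_lo <= profile.chest_cm < c_hi and w_lo <= profile.waist_cm < w_hi:
            return size
    return None  # no stocked size fits; a candidate for made-to-order

if __name__ == "__main__":
    me = BodyProfile(height_cm=178, chest_cm=99, waist_cm=84)
    print(recommend_size(me))  # -> "M"
```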

Other solutions, such as 3D body scanners and cameras, can substitute for wearable mannequins. Companies such as Intel (with RealSense) and Body Labs (acquired by Amazon), among others, continue to develop similar technologies. TechNovus, a UK company, recently announced an AI-powered body measurement platform (see video). BodyGram is another player in the market.

(Sources: Robb Report; Naked Labs)

2. E-commerce

E-commerce refers to the use of digital channels to transact products and services. An e-commerce system provides a centralised platform where different people access the various shops to transact. It can be thought of as the “gateway” to the entire apparel ecosystem, as it is the point of interface for customers. The system can come as a mobile app or website and can incorporate virtual reality devices (e.g., Oculus) or 3D video devices. The e-commerce industry is, however, changing due to an increase in competitors, and key players are looking to provide the best customer experience to remain competitive. As such, they are pursuing diversification, innovation, and partnerships, which can be achieved by leveraging AI.

Artificially intelligent e-commerce systems focus on the customer experience journey. They include AI technologies that enable clients to customise designs, logistics that cover everything from packaging to delivery, and technology that connects to manufacturing centres.

3. Fashion and design

The fashion and design of wearables has traditionally been the domain of manufacturers, apart from a select group of clients who request customised orders. However, design is now being made more accessible with emerging technologies, allowing clientele to choose already developed templates (free and premium) of the designs they love. Clients can also develop customised designs of what they prefer and place an order through the e-commerce system. Look at the designs by Design Hill to get a feel for the concept. Artificially intelligent design systems are those that help users develop professional designs with limited technical knowledge. AI design technology could help ease the transition from shopping for already “designed and available” apparel to shopping for “non-existent but concept-driven” apparel.

4. Decentralised manufacturing through 3D printing technologies

Entities that manufacture different wearables also play a role in the transformation of the industry. 3D printing is a fast-growing technology being used in many industries to make sculptures, houses, cars, and prosthetics, among other things.

The manufacturing of apparel is currently a centralised activity, spanning production in factories to the transportation of finished products to stores or distribution centres. Decentralised manufacturing, on the other hand, focuses on giving power of manufacturing to agents (shopping stores or distribution centres).

Using these new technologies, customers could shop for apparel of their choice through artificially intelligent e-commerce systems and have their orders instantly printed at their nearest agent location and delivered wherever they prefer. There would be no need for manufacturers and retailers to stock ready-made apparel in their stores, as items would be made on demand using 3D printing technology. To prepare for this disruption in the apparel industry, manufacturing entities would have to intensify research and development, focus on leveraging 3D printing technology, and review their partnership models. (Learn more about 3D-printed wearables here.)

5. Logistics

Logistics refers to the process of handling orders through to delivery to the final clients. Some e-commerce players have partnered with courier firms, whereas others handle logistics in-house. It is crucial to have an interlinked logistics network to complete the customer's shopping experience. This can be achieved by leveraging AI technology.

Though not directly linked to the apparel products themselves, logistics is crucial to completing the transaction a customer starts. In practice, this means the customer should be able to track their products from the printing of the order through packaging and transportation. It is worth noting that big logistics players such as UPS and DHL, among others, already have decentralised business models. Investment in drone technology for product delivery is also a field currently being explored.
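One simple way to picture that end-to-end tracking is as a fixed sequence of order states the customer can query at any time. The sketch below is purely illustrative; the state names mirror the stages described above and are not taken from any particular platform.

```python
# Illustrative order-tracking state sequence for an on-demand 3D-printed
# garment. State names are invented to mirror the stages in the text.
ORDER_STATES = ["order placed", "design sent to agent", "3D printing",
                "packaging", "in transit", "delivered"]

class Order:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self._index = 0

    @property
    def status(self) -> str:
        return ORDER_STATES[self._index]

    def advance(self) -> str:
        """Move the order to the next stage, stopping at 'delivered'."""
        if self._index < len(ORDER_STATES) - 1:
            self._index += 1
        return self.status

if __name__ == "__main__":
    order = Order("A-1042")
    print(order.status)                 # order placed
    while order.status != "delivered":
        print(order.advance())          # prints each subsequent stage
```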

Disruption in the fashion industry is inevitable, and it is critical for industry players to prepare and position themselves for the opportunities and threats that technological advancements present to their industry. It is crucial for e-commerce, design, manufacturing, logistics, and full-body scanning to be integrated to provide a customer-centric experience whilst delivering long-term value to partners from the respective industries.



*Emmanuel Ejeu holds a Bachelor of Science in Computer Science and a Master’s in Business Administration from Makerere University. He also holds a certificate in Leading with Artificial Intelligence from the Global Leadership Academy in Partnership with ITCILO.

Emmanuel is an entrepreneur and seasoned consultant with experience in business analysis, technology, research, corporate governance, program management and organisational development. He is enthusiastic about futurism.


Next Event: 25th November @ 15.00 CET

Register here on Eventbrite


Deepfakes: A threat to democracy?

By Rodrigue Anani *

(Image Source: Stephen Wolfram, CC BY-SA 4.0 via Wikimedia Commons)


The manipulation of images or videos is a very old practice, used to deceive or persuade viewers. In 1860, a photograph of the politician John Calhoun was manipulated: his body was combined with the head of Abraham Lincoln in another image.

Technology has made media (photo or video) manipulation easier and more difficult to detect. Tools such as Adobe Photoshop have made manipulation more accessible, a trend that has been accentuated by progress in artificial intelligence (AI). Developments in these fields make it possible to create deepfakes, which use computer vision to create fake images or videos that look very real. IBM defines computer vision as a field of AI that enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs and to take actions or make recommendations based on that information.

The term “deepfake” was first coined in late 2017 by a Reddit user of the same name. It combines the words “deep” and “fake” because the technique uses deep learning to create fake images or videos. Deep learning tries to simulate the behaviour of the human brain, making it possible to learn from large amounts of data. A deepfake can easily show a person participating in an activity they never took part in or make someone appear to say something they never said. Even though this kind of fakery has been possible for decades, new technologies have made it easier, simpler, and accessible to almost anyone.

As explained by Meredith Somers, a deepfake refers to a specific kind of synthetic media where a person in an image is swapped with another person's likeness. As discussed by Sally Adee, a deepfake is created by training a neural network on many hours of real video footage of the target person to give it a realistic “understanding” of what he or she looks like from many angles and under different lighting. The trained network is then used together with computer graphics techniques to superimpose a copy of the person onto a different actor.
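The face-swap pipeline Adee describes is commonly built as a pair of autoencoders sharing a single encoder: each decoder learns to reconstruct one person's face, and at generation time footage of person A is pushed through person B's decoder. The PyTorch sketch below illustrates only that architectural idea in miniature; the layer sizes are arbitrary placeholders, and no training loop, dataset, or face alignment is included, so it is not a working deepfake system.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deepfakes. Layer sizes are arbitrary placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # shared: learns a generic face representation
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# During training, faces of A are reconstructed via decoder_a and faces of B
# via decoder_b. At generation time, a face of A pushed through decoder_b
# yields B's likeness with A's pose and expression.
face_of_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(face_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```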

Yisroel Mirsky and Wenke Lee have identified four categories of deepfakes in the context of human visuals. They are re-enactment, replacement, editing, and synthesis.

Re-enactment is when one person's facial expressions or movements are used to drive the facial expressions or movements of another. It gives attackers the possibility of impersonating another person's identity and controlling what he or she appears to do.

Replacement is when one person's face is replaced with another's. Generally, the victim's face is swapped onto the body of another person in a compromising situation with the purpose of humiliating, defaming, or blackmailing them.

Editing and synthesis involve changing a person's attributes, for instance making the person look younger or older, or even changing their ethnicity.

Re-enactment and replacement deepfakes are great sources of concern because of their potential to cause harm. The face of a politician or an influential person could be re-enacted to make them appear to say something they never said. A person's face could be inserted into an incriminating video, mostly for blackmail.

The State of Deepfakes 2020 report states that “non-consensual and harmful deepfake videos crafted by expert creators are now doubling roughly every six months. The number of deepfake videos detected up to December 2020 amounts to 85,047.” According to Giorgio Patrini, CEO and co-founder of Sensity, reputation attacks by defamatory, derogatory, and pornographic fake videos still constitute the majority of deepfakes, at 93%. Only 7% of deepfake videos were made for comedy and entertainment.
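The cited doubling rate compounds quickly. As a rough, back-of-the-envelope illustration (assuming the rate simply holds, which the report does not guarantee), the December 2020 count would project forward as follows:

```python
# Back-of-the-envelope projection under the report's "doubling roughly every
# six months" claim. Assumes the rate stays constant, which is an assumption
# for illustration, not a forecast.
baseline = 85_047  # videos detected up to December 2020 (from the report)
for half_years in range(0, 5):
    projected = baseline * 2 ** half_years
    print(f"+{6 * half_years:2d} months: ~{projected:,} videos")
```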

Even though there is a lot of debate on the negative side of deepfakes, it is worth mentioning that they can also have positive applications. As discussed by Ashish Jaiman, deepfakes can be used to bring historical figures back to life for a more engaging and interactive classroom. The Dali Museum in St Petersburg, Florida, brought the surrealist painter Salvador Dali back to life with a deepfake. During the exhibition, called Dali Lives, deepfakes made it possible for visitors to interact with him and even take a selfie with him. Deepfakes can also be used in audio storytelling and book narration: imagine listening to a book in the voice of the author who wrote it. Deepfakes can even help enhance freedom of speech under dictatorial and oppressive regimes, as journalists and human rights activists can publish without fearing that their voice or face will be recognised and identified.

In 2020, over 3.6 billion people were using social media worldwide. As of February 2021, 76% of adults in Kenya, 72% of adults in Malaysia, 61% of adults in Turkey, and 47% of adults in Sweden used social media as a source of news. In a world where social media and the Internet are frequently the main sources of information, audiences are at higher risk than ever of encountering and sharing fake news. As Amy Watson described, “Every day, consumers all over the world read, watch or listen to the news for updates on everything from their favourite celebrity to their preferred political candidate, and often take for granted that what they find is truthful and reliable.”

With social media platforms offering little to no fact-checking and with technology rapidly evolving, it is becoming very difficult to detect deepfake videos. This, combined with the current challenges around misinformation, poses a serious threat to democracies, particularly emerging democracies. What would happen if an influential political leader, in a video posted on social media, asked their followers to take some regrettable action? Deepfakes pose a serious challenge to democracy and contribute to the erosion of trust in institutions, among other harms. As U.S. Senator Marco Rubio said,

In the old days if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today... all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.

These dangers and threats have been summed up in a report by the Brookings Institution.

To sum up, we are living in a world where seeing is no longer believing, and that is deeply worrying. Our democracies are under attack, and we must do what is needed to protect them. As deepfakes evolve, so too must the methods and techniques to counter them.



* Rodrigue Anani is a software engineer with over five years of experience. Rodrigue is open-minded, pragmatic, and has a keen interest in building world-class solutions that have a positive impact on genuine and sustainable development. He holds a Bachelor of Science in Information Technology from BlueCrest College, Ghana, as well as a certificate in the “Internet of Things” delivered by the GSMA and a certificate in Leading with Artificial Intelligence delivered by the Training Centre of the International Labour Organisation and the Global Leadership Academy (GLAC). He has working experience and knowledge in both West African and North African countries.


Next Event: 25th November @ 15.00 CET

Register here on Eventbrite

