Deepfakes: A Threat to Democracy?
By Rodrigue Anani *
(Image Source: Stephen Wolfram, CC BY-SA 4.0 via Wikimedia Commons)
The manipulation of images and videos is an old practice, long used to deceive or persuade viewers. In 1860, a photograph of the politician John Calhoun was manipulated: his body was combined with the head of Abraham Lincoln to produce a new portrait.
Technology has made media manipulation (of photos and videos) both easier to carry out and harder to detect. Tools such as Adobe Photoshop made manipulation widely accessible, a trend that has accelerated with progress in artificial intelligence (AI). Advances in these fields make it possible to create deepfakes, which use computer vision to produce fake images or videos that look strikingly real. IBM defines computer vision as a field of AI that enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs and take actions or make recommendations based on that information.
The term “deepfake” was first coined in late 2017 by a Reddit user of the same name. It is a blend of “deep” and “fake”, because deepfakes rely on deep learning to create fake images or videos. Deep learning attempts to simulate the behaviour of the human brain, enabling systems to learn from large amounts of data. A deepfake can make a person appear to participate in an activity they never took part in, or to say something they never said. Although creating such forgeries has been possible for decades, new technologies have made it easier, simpler, and accessible to almost anyone.
As explained by Meredith Somers, a deepfake is a specific kind of synthetic media in which a person in an image or video is swapped with another person’s likeness. As Sally Adee describes, a deepfake is created by training a neural network on many hours of real video footage of the target person, giving it a realistic “understanding” of what that person looks like from many angles and under different lighting. The trained network is then combined with computer graphics techniques to superimpose a copy of the person onto a different actor.
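The core mechanics can be illustrated with a deliberately tiny sketch. Classic face-swap tools use one shared encoder and one decoder per identity: the encoder learns a common representation of faces, each decoder learns to render one person, and the “swap” consists of encoding a frame of person A and decoding it with person B’s decoder. The toy below is an assumption-laden, linear numpy stand-in (random vectors replace aligned face frames, and all array names are illustrative), not any tool’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face frames: flattened 8x8 "images" for two
# identities, A and B (real pipelines train on many hours of video footage).
D, H = 64, 16
faces_a = rng.normal(0.2, 0.1, size=(200, D))
faces_b = rng.normal(-0.2, 0.1, size=(200, D))

# Classic face-swap layout: one shared encoder, one decoder per identity.
enc = rng.normal(0.0, 0.1, size=(D, H))
dec_a = rng.normal(0.0, 0.1, size=(H, D))
dec_b = rng.normal(0.0, 0.1, size=(H, D))

def loss(faces, dec):
    """Mean-squared reconstruction error of encode-then-decode."""
    recon = (faces @ enc) @ dec
    return float(np.mean((recon - faces) ** 2))

initial_loss = loss(faces_a, dec_a)

# Train both autoencoders by gradient descent on reconstruction error;
# the shared encoder sees both identities, each decoder only its own.
lr = 0.02
for _ in range(1000):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                  # encode into the shared latent space
        err = (z @ dec) - faces          # reconstruction error
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ dec.T) / len(faces)
        dec -= lr * grad_dec             # in-place update of dec_a / dec_b
        enc -= lr * grad_enc

# The "swap": encode a frame of person A, decode it with B's decoder.
swapped = (faces_a[:1] @ enc) @ dec_b

print("loss on A before/after training:", initial_loss, loss(faces_a, dec_a))
print("swapped frame shape:", swapped.shape)
```

Real systems replace these linear maps with deep convolutional networks (or GANs), which is what lets the shared encoder capture pose and lighting while each decoder carries identity; the toy only demonstrates the training-and-swap plumbing.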
Yisroel Mirsky and Wenke Lee have identified four categories of deepfakes in the context of human visuals. They are re-enactment, replacement, editing, and synthesis.
Re-enactment is when one person’s facial expressions or movements are used to drive those of another. It gives attackers the ability to impersonate someone, controlling what that person appears to do or say.
Replacement is when one person’s face is replaced with another’s. Typically, the victim’s face is swapped onto the body of another person in a compromising situation, with the purpose of humiliating, defaming, or blackmailing them.
Editing and synthesis involve changing the attributes of a person: to make the person look younger or older, or even change their ethnicity for instance.
Re-enactment and replacement deepfakes are particular sources of concern because of their potential to cause harm. The face of a politician or other influential figure could be re-enacted to say something they never said, or a person’s face could be inserted into an incriminating video, typically for blackmail.
The State of Deepfakes 2020 report states that “non-consensual and harmful deepfake videos crafted by expert creators are now doubling roughly every six months”, and that the number of deepfake videos detected up to December 2020 amounted to 85,047. According to Giorgio Patrini, CEO and co-founder of Sensity, reputation attacks through defamatory, derogatory, and pornographic fake videos still account for the vast majority of deepfakes, at 93%; only 7% of deepfake videos were made for comedy or entertainment.
Even though there is much debate on the negative side of deepfakes, it is worth mentioning that they can also have positive applications. As discussed by Ashish Jaiman, deepfakes can be used to bring historical figures back to life for a more engaging and interactive classroom. The Dalí Museum in St. Petersburg, Florida, brought the surrealist painter Salvador Dalí back to life with a deepfake: during the exhibition called Dalí Lives, visitors could interact with him and even take a selfie with him. Deepfakes can also be used in audio storytelling and book narration; imagine listening to a book narrated in the voice of the author who wrote it. Deepfakes can even help protect freedom of speech under dictatorial and oppressive regimes, allowing journalists and human rights activists to publish without fear that their voices or faces will be recognised and identified.
In 2020, over 3.6 billion people were using social media worldwide. As of February 2021, 76% of adults in Kenya, 72% of adults in Malaysia, 61% of adults in Turkey, and 47% of adults in Sweden used social media as a source of news. In a world where social media and the Internet are frequently the main sources of information, audiences are at higher risk than ever of encountering and sharing fake news. As Amy Watson described, “Every day, consumers all over the world read, watch or listen to the news for updates on everything from their favourite celebrity to their preferred political candidate, and often take for granted that what they find is truthful and reliable.”
With social media platforms offering little to no fact-checking and with the technology rapidly evolving, it is becoming very difficult to detect deepfake videos. Combined with the existing challenges of misinformation, this poses a serious threat to democracies, particularly emerging ones. What would happen if a video posted on social media showed an influential political leader asking their followers to take some regrettable action? Deepfakes pose a serious challenge to democracy and contribute, among other harms, to the erosion of trust in institutions. As U.S. Senator Marco Rubio said,
In the old days if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today... all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.
These dangers and threats have been summed up in a report by the Brookings Institution.
To sum up, we are living in a world where seeing is no longer believing, and that is deeply worrying. Our democracies are under attack, and we must do what is needed to protect them. As deepfakes evolve, so too must the methods and techniques to counter them.
* Rodrigue Anani is a software engineer with over five years of experience. Rodrigue is open-minded, pragmatic, and has a keen interest in building world-class solutions that have a positive impact on genuine and sustainable development. He holds a Bachelor of Science in Information Technology from BlueCrest College, Ghana, as well as a certificate in the “Internet of Things” delivered by the GSMA and a certificate in Leading with Artificial Intelligence delivered by the Training Centre of the International Labour Organisation and the Global Leadership Academy (GLAC). Mr. Anani has working experience and knowledge in both West African and North African countries.
Next Event: 25th November @ 15.00 CET
Register here on Eventbrite