Challenging Abuse and Violence on Social Media Through AI
By ElsaMarie D’Silva*
(Image: Unsplash | Dole777)
Several years ago, a woman journalist shared with me one of the rape threats she had received on Twitter because of her work. Not only did the person threaten her with rape, he also told her that he knew where she lived. This direct threat made her fearful, and she filed a police complaint. All the police advised her to do was to move to a friend’s place for a couple of weeks, which she did. She was upset at being displaced from her home, which had been her safe haven. It was not an ideal solution, but she didn’t want to be at risk should the man follow up on his threat.
This story is not unique. Many women journalists, politicians, activists and influencers are subjected to hate and violence on social media platforms. Many believe it is easier to type out hateful messages such as rape and death threats on social media than it would be to deliver them in person.
A survey of 1,210 international media workers conducted by the International Center for Journalists and the United Nations Educational, Scientific and Cultural Organization (UNESCO) found that 73% of the female respondents had experienced online abuse, harassment, threats and attacks. Even more distressing, 20% of these women reported that they had also been targeted offline with abuse and attacks.
A recent study of the 2020 US election found that women candidates, and especially women candidates from ethnic minority backgrounds, were more likely than white male candidates to receive abusive content on mainstream social media platforms such as Facebook and Twitter.
Global movements like #NotTheCost, #NameItChangeIt, #ReclaimTheInternet and #ByteBack have highlighted violence against women online, making it more visible and helping to shift the narrative among social media companies and government legislators. Yet, despite these campaigns, social media companies are still not addressing these problems in a comprehensive and effective way.
Recently, my organisation, Red Dot Foundation, which works on preventing violence against women and girls through our crowdmapping platform Safecity, was invited by a social media company to attend a training on priority channels for reporting online harassment and violence, as both a preventive and a reactive measure. Whilst non-profits can be roped in to help these companies flag malicious and inappropriate content, I believe Artificial Intelligence (AI) can aid these efforts more efficiently, without putting the burden on organisations that are often already overstretched and underfunded.
There are several examples of AI-powered social monitoring and listening tools that gauge people’s consumption preferences or even serve as early warning systems for large-scale violence. On Twitter, for example, if you try to retweet a news article, you are prompted to read it first. During COVID-19, if you posted about the pandemic on Facebook, Instagram or Twitter, a pop-up appeared prompting you to consult local health information.
So, if social media companies are already designing tools to help us reflect on what we are about to post, why can’t they do the same for online gender-based violence? UN Secretary-General António Guterres has called gender-based violence a shadow pandemic running alongside COVID-19. The WHO confirms that this pervasive violence affects one in three women around the world. The virtual space is simply another channel through which this violence is directed at women and girls.
Furthermore, why should social media companies wait for an abusive post to appear online, create panic and cross boundaries before it is attended to? Why not invest in preventive tools, where words associated with rape, sexual assault and similar threats trigger a prompt warning the user not to post malicious, harmful and often criminal content? These could be a digital version of the “Bell Bajao”, or Ring the Bell, campaign by Breakthrough, which encourages interrupting domestic violence with the simple action of ringing the doorbell. Nudges like these give a person time to pause and reflect. When a neighbour rings the doorbell in the midst of a domestic violence incident, for example, the abuser knows that there are witnesses who could report him. Online nudges might similarly interrupt a person with information, or educate them about legislation that labels the act a crime, prompting them to refrain from posting.
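To make the idea concrete, here is a minimal sketch of what such a pre-post nudge could look like, assuming a simple keyword check. A real platform would rely on trained toxicity classifiers and human review rather than a hand-written pattern list; the patterns, message text and function name below are purely illustrative assumptions.

```python
# Minimal sketch of a pre-post "nudge" check (hypothetical, for illustration only).
# Real systems would use trained toxicity classifiers and human review; the patterns,
# threshold behaviour and message below are assumptions, not any platform's actual logic.

import re

# Illustrative phrases that might trigger a reflection prompt before posting.
FLAGGED_PATTERNS = [
    r"\brape\b",
    r"\bsexual assault\b",
    r"\bkill (you|her|him)\b",
    r"\bdeath threat\b",
]

NUDGE_MESSAGE = (
    "This post may contain threatening or abusive language. "
    "Threats of violence can be a criminal offence. Do you want to review it before posting?"
)

def check_post(text: str) -> dict:
    """Return a nudge prompt if the draft post matches any flagged pattern."""
    matches = [p for p in FLAGGED_PATTERNS if re.search(p, text, flags=re.IGNORECASE)]
    if matches:
        return {"allow_immediate_post": False, "prompt": NUDGE_MESSAGE, "matched": matches}
    return {"allow_immediate_post": True, "prompt": None, "matched": []}

if __name__ == "__main__":
    draft = "I will rape you, I know where you live."
    result = check_post(draft)
    if not result["allow_immediate_post"]:
        print(result["prompt"])
```

The point of the sketch is the interaction pattern, not the matching logic: the post is paused, the user is shown information about the consequences, and only then can they choose to proceed.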
Some years ago, activist Soraya Chemaly and a few others set up a system to tackle pages on Facebook, some with names like “Raping your Girlfriend”, when the company failed to respond to complaints to take them down. They encouraged people to take screenshots of these pages, tweet at the companies whose ads appeared on them and publicly shame them for hosting their ads on offensive pages. The companies would then withdraw their ads, until Facebook was eventually pressured into removing the pages. This manual effort by several activists and organisations could easily be carried out with AI tools.
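As a rough illustration of how that manual workflow might be automated, the sketch below scans page titles against a small set of abusive patterns and compiles a report of the advertisers seen on each flagged page. The page data, patterns and data structures are assumptions made for illustration; a production tool would need platform APIs and a proper abuse classifier rather than keyword matching.

```python
# Hypothetical sketch of automating the manual "flag the page, notify the advertisers" workflow.
# The page data, advertiser lists and keyword patterns are illustrative assumptions only.

import re
from dataclasses import dataclass, field
from typing import Dict, List

ABUSIVE_TITLE_PATTERNS = [r"\brap(e|ing)\b", r"\bbeat(ing)? (your|her)\b"]  # illustrative only

@dataclass
class Page:
    title: str
    url: str
    advertisers: List[str] = field(default_factory=list)

def flag_offensive_pages(pages: List[Page]) -> List[Dict]:
    """Return a report for each page whose title matches an abusive pattern,
    listing the advertisers whose ads were seen on it."""
    reports = []
    for page in pages:
        if any(re.search(p, page.title, flags=re.IGNORECASE) for p in ABUSIVE_TITLE_PATTERNS):
            reports.append({"url": page.url, "title": page.title, "advertisers": page.advertisers})
    return reports

if __name__ == "__main__":
    # Made-up data mirroring the campaign described above.
    pages = [Page("Raping your Girlfriend", "https://example.com/page1", ["BrandA", "BrandB"])]
    for r in flag_offensive_pages(pages):
        print(f"Flag {r['url']} ('{r['title']}') - notify advertisers: {', '.join(r['advertisers'])}")
```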
As we think about building back a better world after COVID-19, we need to use every resource available to end gender-based violence. We have the technology and tools to design interventions that are preventive rather than reactive; it is time to start using them.
* ElsaMarie D’Silva is the Founder of Red Dot Foundation (India) and President of Red Dot Foundation Global (USA). Its platform, Safecity, crowdsources personal experiences of sexual violence and abuse in public spaces. ElsaMarie is a 2020 Gratitude Network Fellow, a 2019 IWF Fellow and Reagan Fascell Fellow, a 2018 Yale World Fellow, and an alumna of the Stanford Draper Hills Summer School, the US State Department’s Fortune Mentoring Program, Oxford Chevening Gurukul and the Duke of Edinburgh’s Commonwealth Leadership Program. She is also a fellow with Rotary Peace, Aspen New Voices and Vital Voices, and a BMW Foundation Responsible Leader. She co-founded the Gender Alliance, a cross-network initiative bringing together feminists from the BMW Foundation Herbert Quandt's Responsible Leaders Network, the Global Diplomacy Lab, the Bosch Alumni Network and the Global Leadership Academy Community (by GIZ). She is listed as one of BBC Hindi’s 100 Women and has won several awards, including the Government of India Niti Aayog’s #WomenTransformingIndia award and The Digital Woman Award in Social Impact by SheThePeople. In 2017, she was awarded the Global Leadership Award by Vital Voices in the presence of Secretary Hillary Clinton. She is also the recipient of the 2016 Gold Stevie Award for Female Executive of the Year - Government or Non-Profit - 10 or Less Employees.
Announcement: Summer Schedule
Dear Readers,
For the months of July and August, the Leading with AI newsletter will move to a bi-weekly schedule. Our next issue will follow, as usual on a Thursday, on 9th September. Wishing everyone a safe and healthy summer.
Warm wishes,
The Leading with AI Team