cyber bullying

How to Stop Cyber Misogyny

by Suzie Dunn

Since the beginning of the COVID pandemic, many parts of our lives have migrated online. We’re using Zoom video calls, WhatsApp group chats, iCloud and Instagram to keep in touch with our loved ones and to work from home. This technology has been an incredible benefit at a time when the circumstances of the world have kept us apart.

Unfortunately, the same technologies that keep us connected are increasingly being used as tools for harassment, stalking and other forms of abuse. The United Nations has documented an increase in technology-facilitated gender-based violence during the pandemic in its 2021 report The Shadow Pandemic.

We’ve seen instances of “Zoom bombing,” in which uninvited guests have interrupted virtual meetings with racist or sexually explicit abuse. Reports of “revenge porn” to the U.K. Revenge Porn Helpline and the eSafety Commissioner in Australia have significantly increased. And in 2020, Statistics Canada reported a rise in police reports of crimes in digital spaces, including the non-consensual distribution of intimate images, uttering threats, child pornography, criminal harassment and indecent or harassing communications.

Technology-facilitated gender-based violence is a relatively new phenomenon. It includes a wide range of behaviours, from the non-consensual distribution of intimate images (sometimes called “revenge porn”) to cyberstalking and online harassment. It can include hate speech, as well as rape and death threats, impersonation, trolling, sexual extortion, voyeurism and doxing (publishing someone’s personal information online without their consent).

Various technologies are used for multiple forms of abuse. “Smart home” technology has been secretly used by abusers to eavesdrop on women, and in some cases, torment them by changing the temperature of their homes or unexpectedly blaring music from their smart home speakers. Similarly, “stalkerware” technology is used to track women’s text messages and GPS locations.

More commonly, social media is used to target women with hateful and sexist messages, including rape and death threats—particularly women in leadership positions and those who are Black, Indigenous, women of colour or LGBTQ+. These forms of violence are a growing problem affecting women, girls, and non-binary, agender and gender-variant people worldwide. A soon-to-be-published study on online gender-based violence by the Centre for International Governance Innovation surveyed people in 18 countries and found widespread reporting of such violence, particularly among members of the LGBTQ+ community.

Other international research, such as the Toxic Twitter report by Amnesty International and the Free to be Online? report by Plan International, documents how this abuse leads to the silencing of women and girls. When people face technology-facilitated gender-based violence, one of the most common responses is to engage less online, particularly on issues of gender and racial equality. This poses a serious social problem, as the internet plays a central role in public dialogue and all people should be free to share their views without risk of harassment, intimidation and violence.

Although a seemingly “novel” form of violence, technology-facilitated harassment and intimidation is simply a new manifestation of older forms of gender-based violence. Cynthia Khoo, author of the recent Women’s Legal Education and Action Fund (LEAF) report, Deplatforming Misogyny, notes that this form of violence is “part of the continuum of violence, abuse, and harassment that women and girls face in the world, regardless of technology.” Stalking, sexual exploitation and threats are all too familiar to survivors of gender-based violence.

Technology provides abusers with a new avenue to engage in these abusive behaviours. Unfortunately, there are some particularly invasive aspects to this form of violence that create greater risks. This is because technology can both amplify abuse and make such behaviour easier to conduct. Through text, email and social media, perpetrators easily obtain ongoing access to their targets. They can reach them day and night, and the harassment and loss of privacy and security often feel relentless and inescapable. Because most people have some online presence, it is relatively easy for abusers to find information about their targets and track their activities. The ease of sharing information through apps, emails and social media—including direct messages (DMs)—has led to the spread of sexual images, as well as reputation-destroying lies, to a person’s personal contacts and beyond.

The abuse that women, girls and others face online has real-world impacts on their offline lives. In the case of intimate-partner violence, technology-facilitated violence is often used in tandem with other forms of violence in the relationship. In their 2017 study, Digital Technologies and Intimate Partner Violence: A Qualitative Analysis with Multiple Stakeholders, Cornell University researchers Karen Levy and Diana Freed found that abusive intimate partners easily gain access to their partners’ devices (such as cellphones), social media accounts and personal information as a means to undertake online stalking and digital privacy invasions. Everyday apps such as Find My iPhone and iCloud can be used to track a person’s location or monitor their communication. Password security questions, like the name of your first pet, are easily guessed by intimate partners who know the personal details of their partners’ lives.

While the use of technology by perpetrators of violence has quickly become ubiquitous, social media companies have been slow to address this growing problem. A recent Wall Street Journal report on Facebook’s content-moderation practices showed that certain high-profile Facebook users were exempt from those practices. This allowed a Brazilian soccer player to upload posts that included images of a woman who had accused him of sexual assault. Normally such content would be quickly removed, but because of the special rules applied to this valuable Facebook and Instagram user, 56 million users saw the images before they were taken down.

The issue of technology-facilitated violence first came to widespread public attention in Canada in the 2010s when two teenage girls had sexual images of themselves taken without their consent and then published online. Both girls were harassed by their peers because of the photos, which quickly spread via text messages and social media.

Amanda Todd, a 15-year-old girl from British Columbia, was tricked into showing her breasts over webcam to an adult man who had a history of extorting young girls online. He took a screenshot of her breasts and threatened to post them online if she didn’t show him more. He later sent the photo to her family and friends and used it as his Facebook profile photo.

When her photo was shared, Todd was taunted with cruel names by her classmates. In 2012, Todd posted a YouTube video about her ordeal and died by suicide a few weeks later. The video went viral after her death, bringing international attention to the extortion and ongoing harassment she faced. The Dutch man accused of sharing Todd’s images was extradited to Canada in 2020 to face charges of extortion, criminal harassment, child luring and child pornography.

Rehtaeh Parsons, a 15-year-old student from Dartmouth, Nova Scotia, reported that she had been raped at a party in 2011. One of the boys took a photo of the incident in which a girl identified as Parsons is leaning out a window, vomiting. The image was shared around the school and Parsons faced relentless harassment.

She switched schools three times. Her father, Glen Canning, recently authored, with Susan McClelland, My Daughter Rehtaeh Parsons, a book detailing the ongoing online attacks his daughter suffered and the lack of support she received from schools, the community and the police. Rehtaeh died by suicide at 17, and, like Amanda Todd’s, her story brought international attention to the ways in which girls have not only been sexually violated and had their images shared without their consent, but subsequently blamed and shamed online.

These stories share the victim-blaming narrative that is familiar to many survivors of sexual assault. Todd and Parsons were labelled “sluts” and told they deserved the harassment they received. Some harassers even told them they should kill themselves. As with other forms of gender-based violence, there are many reports of police not taking complaints of technology-facilitated violence seriously. An independent report into the conduct of the police and the Crown prosecutor’s office in the Parsons case found several errors in the police response and recommended that the Halifax police and RCMP develop stronger policies to deal with sexual assault and child sexual abuse.

After their deaths, media attention around what happened to these two girls ignited public demand for a legal response. Various laws were introduced, including one criminalizing the non-consensual sharing of intimate images. Laws in several provinces now allow people whose sexual images are shared without consent to sue the people who shared them. Most recently, the federal Liberal government has proposed a framework that would require social media companies to remove certain forms of “online harms,” including child sexual abuse material, hate speech and the non-consensual distribution of intimate images.

The challenge with many of these existing laws, and with the proposed online harms framework, is that they are relatively inaccessible for many people and do not provide the kinds of responses many survivors say they need. Civil suits are too expensive for the everyday person. Criminal cases are complex, can take several years to resolve and can add stress for the survivor, particularly if the trial brings unwanted media attention.

The proposed online harms framework also includes mandatory police reporting, which removes choice from the hands of the survivor, who may not want a criminal law-based solution. That is not to say that there isn’t a need for criminal and civil responses to technology-facilitated gender-based violence.

For those who want to seek formal legal supports, there should be legal avenues. Existing criminal laws on harassment, threats and voyeurism, as well as civil laws on defamation and privacy invasions, should apply equally when these acts are committed online and should be taken seriously by legal institutions. However, these laws must be improved to provide more accessible solutions and expanded to enable survivors to obtain non-legal, alternative types of support as well.

Fortunately, there are many solid ideas on how to do this. Professor Hilary Young from the University of New Brunswick and Professor Emily Laidlaw from the University of Calgary have proposed an updated legal model for addressing the non-consensual distribution of intimate images that would provide a faster and more accessible takedown mechanism. LEAF has outlined what a federal body to support survivors of this violence could look like in its 2021 Deplatforming Misogyny report. Such a body would require social media companies to take down abusive content, provide accessible, government-funded supports for getting content taken down, and fund anti-violence organizations and education campaigns to address technology-facilitated violence.

In Nova Scotia, the provincial CyberScan unit provides some of these supports to victims of online violence. The unit helps survivors understand their legal rights, provides informal supports and gives educational presentations in schools. A 2021 report on the unit by Dalhousie postdoctoral fellow Alexa Dodge found that most people who contacted CyberScan wanted help getting online content removed, along with emotional and informational support. Some wanted help navigating their legal options, but many cases could be resolved through the CyberScan unit’s informal supports, and most complainants didn’t want a legal response if the matter could be resolved informally. Dodge made several recommendations, including providing more robust informal responses and improving CyberScan’s educational model so that it is informed by best practices focused on healthy relationships, consent and empathy.

Throughout most of Canada, there continues to be a lack of information for survivors who need technical support (such as having material removed online) or who want to take legal action. Anti-violence organizations that support survivors of gender-based violence also need training so they can provide information and support to victims of technology-facilitated gender-based violence. The BC Society of Transition Houses is one of the leading organizations in Canada providing such information. Its Technology Safety project website offers technology safety resources, a toolkit for preserving digital evidence and information on legal remedies for people experiencing technology-facilitated violence. Such resources are clearly needed nationwide.

We have a long way to go in addressing technology-facilitated violence. We need more accessible options for content removal and a greater focus on victim-centred and trauma-informed responses. Government-funded units like CyberScan could be established nationally, and anti-violence organizations must be given the resources and supports they need to offer safety planning as well as legal and informal supports. Most importantly, we need to believe survivors, take their complaints seriously and keep pushing for supports that will hold abusers and social media companies accountable.

We must uphold the rights of people to live free of violence and ensure technology is not used to perpetrate sexual, racial and other forms of attack. ▼
