Online Abuse Against Women: Where Does It Stem From?
Trigger warning: This article contains discussion and examples of hateful speech, online abuse, racism and topics surrounding gender-based hate and violence.
The rise of the online world has, in many ways, changed the real one. From Netflix to 24-hour news, the functioning of our everyday lives has been transformed by technology over the last two decades.
However, in other instances, the internet has provided a new platform to refract and reflect old realities, rebirthing societal issues into a new digital dimension.
This, regrettably, is the case for gender-based online abuse. As patriarchal society collides head-on with an internet used as a vehicle for violence, women are disproportionately affected by this form of abuse.
According to a study by the social think tank Demos, carried out in October 2021, 26% of Twitter posts about women were abusive, compared with 14% of those directed at men. The study analysed 90,000 posts and comments about reality TV stars from both ‘Love Island’ and ‘Married at First Sight’.
Ellen Judson, the study’s co-author who focuses on social media policy, says reality TV acts as a microcosm for social dynamics on the whole. She said: "We see that the contestants are a relatively equal mix of men and women - and from lots of different backgrounds - so it gives us an opportunity to analyse those differences in how the public are responding to them."
While the study only had access to public posts, not including the more hostile abuse that is most often sent through direct messages, it found that gender-based online abuse exists on a spectrum. It can range from comments about committing violent sexual acts to those that are seemingly less offensive but are embedded in misogynistic tropes, such as picking at someone’s appearance or enforcing harmful stereotypes.
Similarly, abuse was found to be amplified along intersectional lines, with race and sexuality compounding gender-based hate. This goes beyond reality TV stars: in the run-up to the 2017 general election, over half of abusive tweets sent to MPs were directed at one black female MP, Diane Abbott, who was targeted with misogynistic and racist hate.
The BBC’s disinformation reporter, Marianna Spring, writes about the extreme abuse she has received since starting the role. Working with BBC Panorama, she spearheaded an experiment that sought to understand who is behind the screens, armed with keyboards and firing hate into the vast, unforgetting cyberspace.
With the support of social media experts, she created a fake troll, named ‘Barry’, with accounts set up across multiple social media platforms. Much like her own trolls, Barry was mainly interested in anti-vax content and conspiracy theories, following a small amount of anti-women content. To start, Barry posted some abuse on his profile but never messaged any women directly.
The results were deeply concerning. After just one week, the top recommended pages on both Facebook and Instagram were almost all misogynistic. By the end of the experiment, Barry was being guided by the luminescent lights of social media down a dark rabbit-hole of anti-women content, some of which included sexual violence, disturbing memes about sex acts and content condoning rape, harassment and gendered violence.
And while this rabbit-hole may be as perverted as the Mad Hatter is fictitious, its daily drip-feed has a powerful radicalising effect.
In Marianna’s experiment, Barry was shown content that referenced extreme ideologies, including that of the ‘incel’ movement. According to The Week, the term ‘incel’ combines the words ‘involuntary celibate’, describing an internet subculture of men who have developed a hatred for women, believing that feminism has resulted in male oppression. The incel movement differs from the Men’s Rights Movement in that its ideology is specifically sexual, arising from its members feeling excluded from fulfilling their desire to have sex, date or establish relationships with women, usually because of their physical appearance. It has been linked to several acts of violence, such as the shootings in Plymouth in August 2021, where five people were murdered, including the perpetrator’s mother.
Additionally, a survey conducted by the US National Network to End Domestic Violence found that ‘97% of domestic violence programs reported that abusers use technology to stalk, harass and control victims’. Again, this demonstrates the weight behind online threats, presenting the terrifying reality that online and offline abuse are not at all isolated, but are inherently and dangerously connected.
As 2021 saw a series of protests over women’s safety, following the murders of Sarah Everard and Sabina Nessa, among other women, the question arises whether enough is being done to tackle gender-based online abuse.
While online abuse is a criminal offence under the Communications Act 2003, data obtained from several police forces through a Panorama Freedom of Information request showed that reports of online hate have more than doubled in the past five years, while arrests rose by only 32%. The victims were mostly women.
Marianna, herself a victim of online threats, said she was ‘frustrated’ by the way the police handled her concerns that trolls would turn up at her workplace. She first spoke to the police in April but, after being passed between several different people, was only referred to a specialist team in August.
Although police have a large role to play in ensuring the safety of women, they are only part of the picture. The tech giants that pull the invisible strings behind social media platforms are the bodies that have the real power to bring online hate under control.
Many of these tech firms claim that their sites already monitor and ban accounts that have content reported as abusive; Facebook says ‘protecting’ its community is ‘more important than maximising profits’. However, the Centre for Countering Digital Hate found that 97% of 330 accounts sending misogynistic abuse on Twitter and Instagram remained on the site after being reported.
What’s more, the fake troll ‘Barry’ had originally been interested in conspiracy theories and anti-vax articles, so it was expected that he would be inundated with that kind of content; instead, he was steered towards misogyny.
Social media sites have come under increasing pressure not to promote misleading information about vaccines and the pandemic. This goes to show that it is within their power to regulate the circulation of information. So why hasn’t the same happened with misogynistic content?
Julie Posetti, who is a lead researcher with UNESCO, said: ‘We would like to see gender-based online violence treated at least as seriously as disinformation has been during the pandemic by the platforms.’
UNESCO research has helped inform UN proposals to tech companies that were shared exclusively with BBC Panorama. According to the BBC, the draft proposal calls for social media platforms to introduce labels for accounts that have previously sent misogynistic abuse, more human moderators making decisions about offensive material, and an early-warning system for users who fear online abuse could escalate into real-world harm.
These guidelines are in line with the findings from Demos, which concludes that online abuse is too complex an issue to be solved simply by banning abusive accounts. The inability to define ‘online abuse’ as a consistent, distinct category makes algorithm-based account bans ineffective: abuse would be missed and legitimate speech censored. This is especially problematic given the murky ‘freedom of speech’ debates that hover over social media platforms.
Instead, Demos said that the solutions must be systemic, ‘requiring platforms to make changes to their communities and spaces, not just to their content’. This may be through empowering communities or changing algorithms to create levelled, balanced conversation.
While this may sound idealistic, Tristan Harris, co-founder of the Center for Humane Technology, explains that the current advertising-based social media model means platforms are designed to maximise engagement time. To this end, algorithms actively curate an atmosphere in which antagonism, controversy and sensationalism are amplified, encouraging discussion to tip quickly into attacks and abuse. However, he emphasises that while this is the current default setting, it does not have to be: social media need not be synonymous with online hate.
Much as dark alleyways or male-dominated boardrooms change how women navigate the physical world, the fight against online hate is also a fight against the silencing of women: their gradual pushing-out from a space, virtual though it may be, and from a community where they can express themselves.
Demos explains that a vicious cycle permeates the issue. Because online abuse is commonplace, it is seen as predictable, and therefore as avoidable or manageable; once again, women become resigned to adapting their behaviour and presence online to escape abuse.
Scottish politician, Ruth Davidson, a target of misogynistic and homophobic abuse, said:
"I think we have to challenge, I don't think that it is in anybody's interests for women who are consistently abused, in a way that a man wouldn't be, to let other young women who are online and seeing the abuse to think that's just the way things are."
As police forces and tech firms awaken to the reality of online abuse and step up to the challenges it poses, it is vital to break this vicious cycle, not allowing online hate to monopolise the digital world with its power to silence and erase people.
Edited by Olena Strzelbicka
Researched by Larissa Cuturean