Cameron Brown & Katrina James

Fake News and the Free Speech Debate

Updated: Dec 14, 2020

We’ve all heard of fake news. And we’ve probably all seen it, whether we’re aware of it or not.



Most of us would rightly associate the spread of fake news with social media, which allows stories to spread rapidly across huge audiences.


A prime example is the hoax story, circulated during the 2016 US presidential election, that the Pope had endorsed Donald Trump. According to BuzzFeed, it attracted almost 1,000,000 Facebook engagements. We will never know its true impact, but it’s safe to say that it didn’t hurt Trump’s campaign.


A more recent case is the flood of false articles offering fake advice on what to do during the pandemic. The UK government has gone as far as creating a rapid response unit to work with social media outlets to remove the influx of harmful content, such as false medical information and phishing scams. The unit deals with over ten different fake stories being spread each day.


Fake news and free speech

Since the rapid response unit was created only to curb the spread of misinformation with potentially harmful effects on public health, it has not yet been extended to tackle fake news beyond stories concerning Covid-19.


Beyond this, a number of solutions have been proposed to tackle the spread of fake news. These include flagging potentially misleading content on social networks, introducing compulsory ID checks for social network users, banning certain culprits from social media, and making intentional misinformation a criminal offence.


Social media giants have been slow to curb the spread of fake news. As much as these companies say they will regulate content or use AI to find fake news, for the moment this is mostly just white noise. So, what have they actually done about it? And how do their rules affect freedom of speech?


 

Facebook



In 2019, Facebook announced plans to tackle fake news and misinformation ahead of the UK’s general election. Measures included removing fake accounts and reducing the reach of articles that had been debunked by independent third-party fact checkers.


According to a New Scientist article, these changes are likely to have had limited impact, because Facebook also changed its advertising policy. Previously, the policy banned any advertising containing “deceptive, false, or misleading content”. Now it only prohibits “ads that include claims debunked by third-party fact checkers”.


This is a problem for two reasons:

  • There are not enough fact-checkers to sift through the 1 billion pieces of content posted every day (Full Fact, the UK’s largest fact-checking agency and the bigger of Facebook’s two fact-checking partners, has 10 members of staff).

  • Political advertising is not fact-checked, due to Facebook’s “fundamental belief in free expression” (we’ll come back to this point later) and “respect for the democratic process”, which leads the company to believe that it is not its role to verify what politicians say.


Political advertising is exactly the sort of content most likely to affect how things are run. If people believe false claims made about other political candidates, it is likely to affect how much those candidates are trusted, and therefore their success at elections. Equally, false claims about a campaign may overstate the success a politician or political party has had, which could increase their popularity on false grounds.


Facebook’s decision not to fact-check political claims essentially allows politicians to make statements they know are untrue, because they can do so without facing any consequences from such a large platform. Facebook’s “respect for the democratic process” assumes that all politicians are trustworthy. As we well know, not all of them are.

Whilst Facebook’s decision to ban political adverts in the week before the presidential election in November is a step in the right direction, the ban only affects adverts submitted after 27 October. Political ads submitted before then will still run, and advertisers will still be allowed to adjust the targeting on those ads, so they will still reach the people they want to reach.


Based on Facebook’s existing policy, those adverts can also contain lies or misinformation. The only thing this policy will prevent is political ads about last-minute issues arising in the final seven days of the campaign. It also does nothing about the fact that, by the time the ban comes into force, millions will already have voted. In fact, according to analysis by The New York Times, the number is closer to 80 million.


 

Twitter



Twitter takes action based on three categories:

  • Misleading information — “statements or assertions that have been confirmed to be false or misleading by subject-matter experts, such as public health authorities.”

  • Disputed claims — “statements or assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown.”

  • Unverified claims — “information (which could be true or false) that is unconfirmed at the time it is shared.”


In September 2020, Twitter announced more detailed rules to tackle misinformation relating to the US election, a move likely to set it at odds with its most famous user. Twitter is known as Trump’s preferred social media platform, and he regularly uses the site to express his thoughts and to counter bad press by confusing the narrative around negative stories. In May, after Twitter labelled one of his tweets as potentially misleading, he signed an executive order aimed at removing some of the legal protections given to social media platforms, targeting Twitter in particular.



So, what about freedom of speech?


This question is often asked when tech giants introduce stricter rules about fake news. The key issue is who is pushing the misinformation, and why. There is a big difference between deliberately spreading false stories and debating ideas or expressing oneself freely in an in-person discussion among friends and acquaintances. In those settings, the audience can offer counterarguments and disagree, leading to genuine conversation about the topic at hand.

That is completely different from targeted advertising and stories designed to misinform readers with the ultimate goal of influencing how they vote. The creators of such content do so knowingly, and are therefore able to use techniques designed to shape opinions.


We should ask ourselves if intentional misinformation really counts as freedom of speech or whether it is actually closer to propaganda. If this is the case, controlling the spread of false messaging that can easily reach global audiences is probably for the best.



How can you avoid misinformation? Take a look at our guide on Fake News.

