Alex Leggett

Falling Down the Rabbit Hole: The Dark Side of YouTube's Algorithms

Updated: Dec 21, 2021

We’ve all been there… It’s 2AM and once again you’ve found yourself in a cycle of watching countless random YouTube videos rather than getting that much-needed beauty sleep.


Whilst this may seem relatively harmless, beyond the need for a bit more coffee in the morning, it in fact reflects a dangerous feature of YouTube’s business model: the problem of YouTube’s ‘rabbit holes’, which have too often led users down a deadly path toward radicalisation and extremism.


What does the YouTube algorithm have to do with conspiracy theories?

So, what are Rabbit Holes?


Similar to the one Alice falls down in Lewis Carroll’s legendary novel Alice in Wonderland, an internet rabbit hole can be thought of as a journey or process through which the user is encouraged to consume additional content based on what they’ve already watched or read. This can lead internet users to consume content which reinforces their already-held beliefs or, more worryingly, push them toward increasingly extreme content.


These rabbit holes are most clearly found on YouTube, where users are recommended videos based on their previous viewing history. Ever watched a YouTube video and noticed the sidebar of new video thumbnails? These videos are recommended to us because they are what YouTube believes we want to see. Thanks to YouTube’s “autoplay” feature, these videos are spoon-fed to us without us even needing to click on them.
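
To make the mechanism a little more concrete, the sketch below (written in Python) shows, in deliberately toy form, how a history-based recommender combined with autoplay becomes a feedback loop: each automatically played video is added to the viewing history, which then shapes the next recommendation. The video names, tags and scoring are invented for illustration only; YouTube’s real system is proprietary and vastly more sophisticated.

# A toy, hypothetical illustration of a history-based recommender with autoplay.
# The videos, tags and scoring are invented; YouTube's real system is not public.

CATALOGUE = {
    "viking_history_documentary":   {"vikings", "history"},
    "norse_mythology_explained":    {"vikings", "mythology"},
    "viking_heritage_and_identity": {"vikings", "identity"},
    "white_supremacist_lecture":    {"identity", "extremism"},
}

def similarity(tags_a, tags_b):
    # A crude stand-in for "videos like the ones you already watched".
    return len(tags_a & tags_b)

def recommend(history):
    # Rank unwatched videos by how closely they match everything watched so far.
    watched_tags = set().union(*(CATALOGUE[video] for video in history))
    candidates = [v for v in CATALOGUE if v not in history]
    return max(candidates, key=lambda v: similarity(CATALOGUE[v], watched_tags))

# Autoplay: each recommendation is watched automatically and fed straight back
# into the history, so the hole deepens without a single extra click.
history = ["viking_history_documentary"]
for _ in range(3):
    history.append(recommend(history))

print(history)
# ['viking_history_documentary', 'norse_mythology_explained',
#  'viking_heritage_and_identity', 'white_supremacist_lecture']

Each step looks locally reasonable (‘more videos like the last one’), yet after only a few iterations the viewer has drifted a long way from where they started.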



Why should we care about all this?


Rabbit holes may seem relatively harmless; after all, social media platforms argue that they are only trying to show us content they think we would enjoy. The danger, however, lies in what exactly is being recommended to YouTube’s billions of users.


YouTube has greatly evolved as a platform over the past few years. It is no longer merely the website people visit to watch cat videos or viral sensations such as ‘Gangnam Style’. Today, YouTube is a multimedia powerhouse hosting an endless stream of content. At least two billion users log in to YouTube each month to watch the 500 hours of content which is uploaded to the platform every minute.


These statistics show one thing very clearly: YouTube has become a media behemoth with an almost inconceivable number of videos. Perhaps it is this vast array of content which has attracted such a large and loyal group of users among the younger generation. Indeed, it is young people who are most at risk from YouTube’s rabbit holes, since they increasingly rely on the platform for news and information about the world. According to Ofcom, nearly one third of Brits aged 16-24 use YouTube to find news, while it is the second most popular news source for those aged 12-15.


"nearly one third of Brits aged 16-24 use YouTube to find news"

When users become reliant on YouTube for information, often spending hours each day on the platform, they become more susceptible to having their worldview distorted by the videos they watch. After all, our perspectives on the world are all shaped by the media we consume. This becomes a problem, however, once a user falls into one of YouTube’s rabbit holes.


Since YouTube recommends videos based on what it believes we would like to watch, it often pushes us toward videos which reinforce our pre-existing beliefs. This phenomenon is not unique to YouTube; it is common to almost all social media platforms. Our increasing reliance on social media for news creates so-called ‘echo chambers’ which feed us opinions and online experiences supporting our own views, giving us a one-sided view of every story. This deprives us of opposing views and fosters an increasingly polarised political atmosphere.


More worrying, however, is YouTube’s trend of pushing users toward extreme content. There are countless examples of YouTube users being recommended baseless conspiracy theories and hate-filled content after having watched perfectly normal videos. After watching a video about Vikings, one user recalls being shown white supremacist content in his recommended section. YouTube has become a platform for radicalisation and is now viewed as a key recruiting ground where the far right and other extremist groups attract new followers through online indoctrination.


YouTube regrets

But this is all just on the internet, right?


Unfortunately, the supposed distinction between the online and offline worlds is a relic of the past. When we spend countless hours online, the digital world becomes our principal filter on reality. This is especially true during the current pandemic, which has confined us to our homes and computer screens.


Extremist views adopted through online radicalisation all too often bring deadly consequences. In March 2019, 51 people were killed when a lone gunman opened fire on worshippers at two mosques in Christchurch, New Zealand. It was a striking example of a terrorist attack both inspired by and staged for online media: the terrorist had attached a camera to his gun, live-streaming for 17 minutes as he carried out the attack, and had published an online manifesto expressing white supremacist views.


The official report into the attack concluded that the attacker’s YouTube usage “may have contributed to” the tragic events. The terrorist had subscribed to right-wing YouTube channels and had regularly viewed content voicing extreme-right and white supremacist views. Notably, investigations into his online activity showed that he “spent much time accessing broadly similar material on YouTube”. This corresponds to a YouTube rabbit hole, in which users are constantly bombarded with videos promoting a particular agenda. Whilst the Christchurch terrorist had held extremist beliefs from an early age, his YouTube viewing habits served to reinforce and strengthen those views, eventually leading him to commit the deadliest terror attack in New Zealand’s history.


Flowers being laid in tribute after the attack

Is YouTube purposefully promoting extreme content?


Well, yes and no. Obviously, YouTube itself is not an extremist organisation that specifically wishes to radicalise users. However, its business model does, in effect, lead it toward doing this very thing.


As a largely free-to-access platform, YouTube relies primarily on advertising revenue. In 2019, YouTube’s advertisements earned Alphabet – the parent company of both YouTube and Google – over $15 billion. YouTube’s offer to businesses considering advertising on the platform is twofold: firstly, a large viewer base which watches content regularly; secondly, data analytics allowing advertisers to target specific audiences. This advertising-driven business model requires YouTube to ensure that its users keep viewing content on the platform, and so the aim of its algorithms is to attract and retain user attention.


According to one former YouTube engineer, the principal objective of the platform’s AI-based recommendation algorithm is to “maximize watch time at all cost” by ensuring more people watch more videos for longer. One problem with such AI algorithms is that very few people understand how they make decisions, especially in the case of social media platforms keen to keep their inner workings secret.
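
To see why that objective matters, consider the hedged sketch below: a hypothetical Python ranking step that simply orders candidate videos by their expected watch time. The numbers, video names and prediction function are all made up, and none of this is YouTube’s actual model; the point is only that an objective defined purely in minutes watched contains no notion of truth, balance or harm.

# A hypothetical sketch of a watch-time-driven ranking step.
# Nothing here reflects YouTube's actual model, which is not public;
# it only illustrates the incentive described above.

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    predicted_watch_minutes: float  # output of some imagined engagement model
    predicted_click_rate: float     # probability the user clicks at all

def expected_watch_time(c: Candidate) -> float:
    # The objective: expected minutes watched. There is no term for
    # accuracy, balance or harm, only engagement.
    return c.predicted_click_rate * c.predicted_watch_minutes

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # Order recommendations purely by expected watch time.
    return sorted(candidates, key=expected_watch_time, reverse=True)

feed = rank([
    Candidate("measured_news_report", 4.0, 0.30),
    Candidate("sensational_conspiracy", 12.0, 0.25),
])
print([c.video_id for c in feed])
# ['sensational_conspiracy', 'measured_news_report']
# 12.0 * 0.25 = 3.0 expected minutes versus 4.0 * 0.30 = 1.2: the more
# gripping video wins, even though nothing asks whether it is true.

Under an objective like this, whatever keeps people watching longest rises to the top, which is exactly how a rabbit hole gets dug.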


Nevertheless, given the countless users who have been recommended increasingly extreme content as they fall deeper down YouTube’s rabbit holes, it is hard to avoid the conclusion that the algorithm favours extreme content precisely because it keeps users on the platform.



What is being done to tackle rabbit holes?


The Christchurch attack prompted calls for tighter regulation to remove extremist content from social media. Weeks later, the so-called Christchurch Call brought together a group of prominent political, social and business figures, led by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, with the aim of eliminating extremist content online. Since then, the EU has advanced potential legislation which would require online platforms to play a more active role in detecting and removing terrorist content.


However, regulatory action so far has focused on removing extreme content; much less attention has been directed toward the algorithms that drive users down rabbit holes toward that content in the first place. YouTube says it has made efforts to remove content violating its community guidelines, claiming to have taken down many hate videos. It also claims to have tweaked its algorithms to reduce recommendations of ‘borderline’ and misleading content which may be harmful to users, while promoting content from authoritative sources.


Nevertheless, campaigners say that more needs to be done and have called on YouTube to provide greater transparency to allow researchers and regulators to better understand how its algorithms recommend content. Whilst progress appears to have been made in tackling rabbit holes and harmful content on YouTube, the fact remains that the platform continues to promote content which is potentially misleading or harmful to its users, including far-right extremism and outlandish conspiracy theories.


Jacinda Ardern

YouTube in the Looking-Glass


YouTube is not the only platform to host misinformation and harmful content, yet its vast size and its ability to hold users’ attention to the point of near-addiction mean that we must take this issue seriously. Spending hours watching videos recommended by the YouTube algorithm may seem a harmless manifestation of laziness, but it can lead to something much darker. Too many tragic events have been inspired by extremist content online, so it is imperative that we watch out for YouTube’s rabbit holes while pressing the platform to do more to tackle this sinister phenomenon.



For more information on this topic, see our section on Fake News & the Role of the Media.
