Kimia Ipakchi

Technology, AI and Discrimination Against Women

Updated: Dec 21, 2021

When we talk about discrimination, we often associate it with prejudice, intolerance and ignorance; attributes we think of as distinctly human. When we talk about technology and artificial intelligence (AI), however, logical programming and mathematical reasoning come to mind. We don’t usually think of technology and AI as capable of being biased or discriminatory. These technologies are almost portrayed as being ‘too advanced’ to have bias.


However, recent research into image-generating algorithms calls for a reset on how we think about technology reproducing human biases and building them into its functioning. The results of the study show how the development of algorithms, technological devices and artificial intelligence can carry a programmed bias against women and contribute to sexual and gender discrimination online.


Recent research


A new study by Ryan Steed, a PhD student at Carnegie Mellon University, and Aylin Caliskan, an assistant professor at George Washington University, examines the possibility of bias in image-generating algorithms. The researchers fed the algorithms separate pictures of a man and a woman, each cropped below the neck, and monitored how the algorithms autocompleted the missing bodies.


According to the study, the image of the man was autocompleted wearing a suit 43% of the time. When the algorithm autocompleted the woman, a shocking 53% of the time she was depicted in a bikini or a low-cut top. The research received public attention when a picture of U.S. Representative Alexandria Ocasio-Cortez’s head was autocompleted with a body in a bikini. The image was removed from the research paper, and the issue of programmed sexual and gender discrimination was brought into question.



Are algorithms biased?


The algorithms in the study, like the language-generation algorithms that caption images and produce text, are trained on data scraped from the internet. So, are they biased? Yes and no. Such algorithms are not inherently prejudiced, but they are shaped by the data they consume: the internet.


From the darker corners of the internet to more mainstream spaces, the web has become fertile ground for the sexualisation of women, harmful stereotypes and misinformation, all of which run rampant online.


The algorithms detect these patterns of extremism and prejudice and weave them into their data analysis. In other words, they must learn from something, and that something is the sexist, racist, discriminatory discourse found on the internet. This in turn means that the algorithms become extensions of the same sexist ideas circulating online.
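
For readers curious about what this looks like in practice, here is a minimal sketch of how researchers probe a model’s learned representations for such associations, in the spirit of embedding-association tests. The vectors below are random toy stand-ins rather than outputs of any real model, and the concept names are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical embeddings standing in for what a model learns from internet data.
embeddings = {
    "man":    rng.normal(size=dim),
    "woman":  rng.normal(size=dim),
    "career": rng.normal(size=dim),
    "family": rng.normal(size=dim),
}

def cosine(a, b):
    # Cosine similarity: how close two concepts sit in the learned space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the training data pairs "woman" with domestic or sexualised contexts more
# often than "man", the learned geometry reflects it and these numbers diverge.
print("man-career:  ", cosine(embeddings["man"], embeddings["career"]))
print("woman-career:", cosine(embeddings["woman"], embeddings["career"]))
print("man-family:  ", cosine(embeddings["man"], embeddings["family"]))
print("woman-family:", cosine(embeddings["woman"], embeddings["family"]))

With real embeddings trained on web data, systematic gaps between those similarity scores are exactly the kind of absorbed bias the study describes.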



What does this mean?


This has many implications. Such algorithms underpin computer-vision applications such as video-based job interviews, as well as surveillance and facial recognition.


Gender is not the only bias that algorithms and artificial intelligence promote. Algorithms are also influenced by racist discourse and produce racial bias in machine-led decisions such as job applications, bail decisions and the distribution of healthcare.


Last year, algorithmic bias made headlines across major UK media outlets when an algorithm used to determine students’ GCSE and A-Level grades appeared to disproportionately downgrade disadvantaged students.


Discrimination against women and minorities in AI

Algorithms: too much power?


The way algorithms operate is not completely beyond our control. After all, they are created and programmed by humans. But in a world of abundant and unfiltered information, it is hard to control the data that algorithms consume.


More recently, there has been a shift from supervised to unsupervised learning. Previously, machines were given human-made ‘labels’ telling them exactly which patterns to look for. With unsupervised learning, machines no longer need humans to label the images; they look for whatever patterns they can find. As a result, the datasets these algorithms consume are not filtered and often contain disturbing language and ideas. This has prompted numerous calls for regulation and data transparency.
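
As a rough illustration of that shift, the sketch below contrasts the two approaches on synthetic data using scikit-learn. The dataset, models and cluster count are illustrative assumptions, not anything used in the research discussed above.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic, clearly separable data standing in for a real training set.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: a human supplies the labels y, so the model only learns the
# categories we explicitly chose to define.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Unsupervised: no labels at all; the model groups the data by whatever
# patterns it can find, desirable or not.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("Supervised accuracy on its own labels:", clf.score(X, y))
print("First ten clusters found without labels:", clusters[:10])

The point of the contrast is simple: without human labels, nothing steers the model away from the harmful patterns present in its training data.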



The feminisation of AI


The promotion of gender discrimination and the increasing feminisation of artificial intelligence can also be seen in everyday technology.


Voice-based services that act as virtual assistants, such as Cortana and Alexa, use a female voice. Interestingly, Siri uses a female voice in the US but a male one in the UK. Jeremy Wagstaff, a technology consultant, told the Guardian that British people “mumble and obey authority”, and so want a voice that sounds authoritative, which apparently means male.


In addition, an article on the gender politics of robots in The Atlantic concluded that devices handling administrative tasks, such as guiding users through public transport or sorting phone messages, tend to have female voices. The article describes the importance of ‘likeability’ from a marketing perspective. People prefer a female voice to a male one for such menial tasks and ‘dislike hearing a man go about secretarial work’, because the female voice aligns with the patriarchal stereotypes the user is familiar with.


"Through AI, women remain obedient and symbols of consumption"

It’s not hard to see how the use of female voices mirrors sexist ideas of female subservience and power dynamics. Through AI, women remain obedient symbols of consumption, expected to react and adhere to masculine projections and demands. The feminisation of AI reveals the role of technological devices in reproducing master-servant narratives and further entrenching patriarchal structures.



Cyberfeminism


Cyberfeminism can be seen as a response to this: a call for women to reclaim technology and fight for gender inclusivity in the technological realm. Peaking in the late 1990s and early 2000s, the movement promoted critical analysis of the relationship between technology, power relations and systemic oppression, and of how these shape women’s participation online.


These movements have spilled over into contemporary art. One example is the use of ‘fembots’ as a symbol of reclaiming autonomy in an age of increasing feminisation of AI. Despite receiving criticism for portraying essentialist views of gender and further sexualising women, the fembot has entered mainstream pop culture, with Charli XCX’s track ‘Femmebot’ on her album Pop 2 as an example.



Technology, gender and identity


The damaging effect of programmed gender discrimination in artificial intelligence is further emphasised when we consider the increasing role of technology and online spaces in shaping identity.


In the book ‘Glitch Feminism: A Manifesto’, author Legacy Russell explores the relationship between gender, technology and identity. Russell explains how technology and the virtual sphere allow us to go beyond the boundaries of our physical body and gendered stereotypes to navigate a more fluid identity.


Technology, therefore, can play an important role in the liberation of gender identity, which emphasises the need to combat discriminatory technologies.



For more articles on this topic, head to our dedicated Gender Issues & Feminism section.


Edited by Olena Strzelbicka

