How AI Bots Have Reinforced Gender Bias in Hate Speech

48 | 2023

Daniele Battista*, Jessica Camargo Molano**

* Department of Political and Social Studies, University of Salerno, Italy
https://orcid.org/0009-0005-8418-8374
** International Telematic University Uninettuno, Italy
https://orcid.org/0009-0007-9596-0950

The aim of this article is to examine the issue of hate speech in the digital society, with a particular emphasis on gender and misogynistic hate speech. In this context, it presents concrete examples of biases observed in Artificial Intelligence (AI) systems, considering emblematic cases such as Amazon’s AI recruitment tool and Microsoft’s Tay chatbot. The objective is to highlight how such biases have the potential not only to perpetuate gender-based discrimination but also to exacerbate existing inequalities. In light of these considerations, the article arrives at a fundamental conclusion: the need for a multifaceted approach to address misogynistic hate speech and its manifestations against women, an approach that entails, above all, a steadfast commitment to gender equality and the promotion of social justice within the digital environment.

Keywords

Artificial Intelligence, gender bias, hate speech, misogyny

DOI: https://doi.org/10.22355/exaequo.2023.48.05

Copyright: Creative Commons – CC BY NC

