
The Other Side of the Story: Artificial Intelligence as an Ally Against Prejudice

Matilde Tassinari is a postdoctoral researcher in Social Psychology at the University of Helsinki. Her work examines intergroup contact, collective action, and social inequality through a social identity lens. She specializes in applying new technologies such as immersive media and artificial intelligence to advance experimental research on intergroup relations.
This article is part of the myths, mysteries, and misconceptions theme.

edited by mimmu and sophie. illustrated by sophie and vicky. Should you have any comments, please let us know!

Machines are here to flatten creativity, replace workers, and smuggle bias into every corner of life – this is the mood that swept in with the rise of generative AI. Some call it “AI hysteria”, but I think we should call it a moral rush: a wave of fear and frustration that arrived alongside dazzling new AI tools. A good degree of it is deserved: AI has been used irresponsibly to amplify spam and scams, hoover up creative work without fair credit, and produce deepfakes that target real people.

These concerns aren’t baseless: technology can and does go wrong, sometimes disastrously, especially when handled carelessly. But when we claim that AI feeds bias, we risk overlooking something important. AI can reflect biased patterns, yes (especially when set to mirror biased human behaviour), but it can also expose bias, measure it, and reduce it when we design and govern systems with care.

Is AI biased by nature?

AI can reproduce existing social biases, and in some cases it even seems to develop its own patterned distortions – a phenomenon often referred to as “AI bias”. Put simply, AI bias happens when an AI system learns skewed patterns from its data and goes on to make unfair or incorrect decisions, especially against certain groups of people.

AI learns from examples, so any skew in those examples can become a skew in the output. For instance, if men were historically favoured for engineering roles, a model trained on that data can learn to rate men as “better fits” for open positions, simply because women with identical qualifications were overlooked. Similarly, if face-recognition data underrepresents darker skin tones, accuracy can drop for those faces, producing mistakes that disproportionately affect certain groups. The sketch below shows how little it takes for this to happen.
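To make the hiring example concrete, here is a minimal sketch of skew-in, skew-out. Everything in it is synthetic and hypothetical – the data, the favouritism, the numbers – but the mechanism is the general one: a standard classifier trained on biased decisions reproduces them for identical candidates.

```python
# A hypothetical illustration of "skew in, skew out": the data is
# synthetic, and the historical favouritism is built in by hand.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
qualification = rng.normal(0, 1, n)   # same distribution for everyone
is_man = rng.integers(0, 2, n)        # 1 = man, 0 = woman

# Historical decisions: at equal qualification, men were hired more often.
p_hired = 1 / (1 + np.exp(-(qualification + 1.5 * is_man - 1.0)))
hired = rng.random(n) < p_hired

model = LogisticRegression().fit(np.column_stack([qualification, is_man]), hired)

# Two candidates with identical qualifications, differing only in gender:
candidates = np.array([[1.0, 1], [1.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the man gets the higher score
```

Nothing in the model is malicious; it has simply learned that gender predicted past decisions and treats that as signal.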

This means that AI isn’t biased by nature, but it tends to reflect the ways we choose to use it. When we train an algorithm on skewed data, or deploy it to serve narrow or prejudiced goals, we shouldn’t be surprised if its outputs reproduce those same distortions. In other words, the bias is less a property of the machine itself than of the choices and values we build into it. But what if humans actually learn from the AI’s bias and keep reproducing it by themselves?

AI can amplify human bias

A powerful study1 explored this question using a simple simulated medical task, in which volunteers played the role of doctors diagnosing a disease with the help of an AI. The AI was usually correct, but it had a hidden flaw: for one specific symptom, it gave consistently wrong recommendations. In other words, the system had a built-in bias.

Many people simply followed the AI’s advice, even when it went against what they had been taught. The striking part came when they stopped receiving suggestions from the AI, and everyone had to decide on their own. Participants who had previously worked with the biased AI continued to misclassify the same symptoms more often than those who had never seen its advice. It was as if the AI’s pattern of errors had been absorbed into their own thinking.

This phenomenon goes beyond a simple over-reliance on automation – instead, the AI seems to function like a teacher, with people internalising its bias as if it were a useful shortcut. Once learned, this shortcut can be hard to unlearn and can spread to new situations. The message is clear: when we design and deploy AI systems, we have to think not only about how they behave, but also about how they influence us.
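The dynamic is easy to see in a toy simulation. The sketch below is my own simplification, not the study’s actual paradigm: a learner who mostly defers to a biased adviser ends up storing the adviser’s errors as personal experience, and keeps making them once the adviser is gone.

```python
# A toy model (my own sketch, not the study's code) of "inherited" AI bias.
import random
from collections import Counter, defaultdict

random.seed(42)
correct = {"typical": "A", "tricky": "A"}    # ground truth: always disease A
ai_advice = {"typical": "A", "tricky": "B"}  # the AI errs only on "tricky"

def phase1(with_biased_ai, n_cases=200, follow_rate=0.8):
    """Decide cases (with or without the AI) and remember one's own decisions."""
    memory = defaultdict(Counter)
    for _ in range(n_cases):
        symptom = random.choice(["typical", "tricky"])
        if with_biased_ai and random.random() < follow_rate:
            decision = ai_advice[symptom]   # defer to the AI's suggestion
        else:
            decision = correct[symptom]     # fall back on one's training
        memory[symptom][decision] += 1      # the decision becomes "experience"
    return memory

def phase2(memory, symptom):
    """Unaided decision: whatever dominated one's own past decisions."""
    return memory[symptom].most_common(1)[0][0]

biased = phase1(with_biased_ai=True)
control = phase1(with_biased_ai=False)
print("tricky case, after biased AI:", phase2(biased, "tricky"))   # -> "B"
print("tricky case, never saw AI:", phase2(control, "tricky"))     # -> "A"
```

The point of the toy model is the last two lines: the adviser is no longer present, yet only the group that trained alongside it keeps choosing the wrong diagnosis.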

Can AI challenge stereotypes?

So yes, AI systems can teach and even amplify human bias, but that doesn’t make AI uniquely or inevitably biased. It’s worth asking whether we can also use these tools to challenge and reduce prejudice. Social psychologists have explored this possibility and suggested that instead of letting AI copy our stereotypes, we can deliberately make it go against them – what they call “counter-stereotypical AI”.

Many of the AIs we use today feel surprisingly social: we give them names, they speak with human-like voices, they may have avatars or photos, accents, or roles that represent gender, ethnicity or age. Because of this, we often treat them a bit like people. A study2 argues that these systems effectively become stand-ins, or “proxy members”, of human groups. Talking to an AI tutor representing a Black person or a female-voiced navigation system is not the same as meeting and interacting with humans, but it is still a form of contact that can shape our expectations and feelings. This is where counter-stereotypical design comes into play.

Social psychology has shown that our prejudices are built through months or years of repeated exposure to the same patterns: women may be thought of as caring but not competent, men as strong but not emotional, some ethnic groups as unqualified rather than professional, and so on. Over time, our minds come to expect these associations, but when we meet someone who breaks the pattern, it forces us to think again. Counter-stereotypical AI does exactly that. Imagine a highly competent female-voiced AI that leads a complex technical discussion instead of apologising and serving reminders, or a warm, emotionally attuned male caregiving bot that supports you when you are stressed. These AIs break common stereotypes about competence, warmth, and gender roles.

At first, we might feel surprised when we can’t rely on our old stereotype shortcuts, but eventually our mental pictures adjust: perhaps “woman” starts to fit naturally with “highly competent engineer”, and “man” with “gentle and caring”. The more these interactions happen, the more these new, richer associations begin to feel normal. Social psychologists argue that, in this way, counter-stereotypical AI can slowly weaken and reshape stereotypes at a cognitive level.

AI as a mediator for meaningful encounters

Another application of AI to bias reduction is using it to create meaningful encounters between people from different social groups3. For decades, research has shown that prejudice can decrease when people from different groups interact under the right conditions, such as feeling equal in status, cooperating, and sharing goals. These encounters help replace stereotypes with a more positive and human view of the “other”.

However, in many real-world settings such interactions are rare: people from the same neighbourhoods and backgrounds share the same schools, hobbies, and workplaces, so there are simply few opportunities to meet people from other groups. To overcome these barriers, researchers have developed indirect forms of contact, such as reading stories about friendships between groups or observing positive interactions in the media.


AI opens a new door for putting these ideas into practice: lifelike conversational agents, so-called “digital humans”, can simulate friendly and cooperative interactions with people in safe and structured ways. People can talk with these agents on their own phones or computers, at their own pace, without the pressure that sometimes comes with face-to-face contact. This closeness can make a difference. When someone feels they know and trust a member of another group, even a virtual one, it can soften attitudes toward the wider group that character represents. Rather than replacing real-life encounters, AI could help prepare the ground for them, making future interactions between groups less anxious, more curious, and more humane.
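As a sketch of what such an agent could look like in practice – purely illustrative, assuming the OpenAI Python SDK (any conversational model would do), and with a persona invented for this example – the ground rules of good contact (equal status, cooperation, shared goals) can be written directly into the agent’s instructions:

```python
# A minimal sketch of an E-contact chat loop: the model plays a
# "digital human" persona under contact-friendly ground rules.
# Assumes the OpenAI Python SDK and an API key in the environment;
# the persona text and model choice here are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are 'Amal', a friendly conversation partner from a different "
    "cultural background than the user. Keep the exchange warm, equal in "
    "status, and cooperative: ask about shared goals and common interests, "
    "and share personal (fictional) experiences when asked."
)

history = [{"role": "system", "content": PERSONA}]
while True:
    user_turn = input("you> ")
    if user_turn.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Amal>", answer)
```

The design choice that matters is not the model but the system prompt: it encodes the conditions that contact research identifies as prejudice-reducing, rather than leaving the interaction’s tone to chance.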

AI-generated images as a research tool

Although these tools are not yet widely used in real life, AI is already being used to support research on reducing prejudice. In our recent study4, we used AI to create realistic human faces to better understand weight stigma, the bias many people hold against individuals with higher body weight. These attitudes can harm people’s health, self-esteem, and even the care they receive from professionals. To study such biases, researchers often focus on implicit attitudes – the fast, automatic reactions that we may not notice but that still influence our behaviour – which are typically measured by showing people pictures of stigmatised group members.

Instead of relying on photos of real people, which are often extremely hard to get, we used generative AI to create realistic faces of individuals who appeared to have obesity. Participants rated the faces as highly realistic, which shows that AI-generated faces can be convincing enough to use in psychological research. We then asked them to evaluate the AI-generated photos, and their ratings clearly reflected weight stigma: the higher the perceived body weight of the person in the picture, the lower the ratings of attractiveness and competence.
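For readers curious what the analysis behind a sentence like that can look like, here is a minimal sketch on made-up numbers (not our study’s data): once each face has a perceived-weight rating and an evaluative rating, weight stigma shows up as a negative correlation between the two.

```python
# A sketch of the kind of analysis described above, on fabricated
# illustrative numbers (not the study's data or effect sizes).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_faces = 60
perceived_weight = rng.uniform(1, 7, n_faces)  # mean rating per face, 1-7 scale

# Hypothetical stigma effect: higher perceived weight -> lower competence rating.
competence = 6 - 0.5 * perceived_weight + rng.normal(0, 0.6, n_faces)

r, p = pearsonr(perceived_weight, competence)
print(f"r = {r:.2f}, p = {p:.3g}")  # a reliably negative r indicates weight stigma
```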

With this research, we showed that by generating diverse, realistic faces in a controlled way, AI gives researchers the tools to detect hidden biases, test interventions to reduce prejudice, and design fairer environments. 

Looking ahead, current research shows how AI can do more than mirror our prejudices; it can actively help us study and reduce them by generating tools for bias research, representing counter-stereotypical minority members, and making interactions across group lines more accessible. AI can become an ally and help build less judgmental societies, where people are seen for who they are as individuals.

References

  1. Vicente, L., & Matute, H. (2023). Humans inherit artificial intelligence biases. Scientific Reports, 13(1), 15737. https://doi.org/10.1038/s41598-023-42384-8
  2. Hermann, E., De Freitas, J., & Puntoni, S. (2024). Reducing prejudice with counter-stereotypical AI. Consumer Psychology Review, 1–12. https://doi.org/10.1002/arcp.1102
  3. Manfredi, A., Puzzella, G., Landi, D., Iacono, I., Michilli, J., & Gabbiadini, G. (2025). AI-driven digital humans for E-contact: A pre-registered study on reducing intergroup bias with generative artificial intelligence. Acta Psychologica, 258, 105129. https://doi.org/10.1016/j.actpsy.2025.105129
  4. Tassinari, M. (2025). Validating AI-generated stimuli for assessing implicit weight bias. In A. L. Brooks, D. Banakou, & S. Ceperkovic (Eds.), ArtsIT, Interactivity and Game Creation (ArtsIT 2024), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 650. Springer, Cham. https://doi.org/10.1007/978-3-031-97254-6_14