COMMENTARY

Barbie and the dark side of generative artificial intelligence

As Barbie-mania grips the world, the peppy cultural icon deserves thanks for helping to illustrate a dark side of AI

Published August 17, 2023 5:30AM (EDT)

Barbie & Artificial Intelligence (Photo illustration by Salon/Getty Images)

Many are excited about the ways in which AI might enhance our lives. Any development that gives people the ability to express themselves in a new way is a thrilling one, and the speed at which AI tools are being funded, created, and deployed has propelled these systems into the mainstream almost before we could blink.

But many also fear the potential harms of this tech going mainstream, including how it can be used to exacerbate existing inequalities that harm entire communities. Take Buzzfeed's recent publication of images of 194 AI-generated Barbie dolls, many of which perpetuated inaccurate and hurtful cultural stereotypes that were not caught before publication.

This bias does not occur in a vacuum. The nature of outputs like these AI Barbie images depends on the system's training data, its training model and the choices its human creators make throughout the process. The old adage, "garbage in, garbage out," still holds true. AI systems need to be fed huge amounts of data to work, and will "learn" the biases most prevalent in that data. This data, scraped from the open web, tends to parrot the most dominant voices, often at the expense of minority and marginalized identities.

The AI-generated Barbies are a very visual and visceral example of why AI should not be used to make more consequential decisions in our lives, like housing, employment, medical care or schooling. They are a vivid illustration of how these tools emerge with societal biases baked in.

Furthermore, in a world where wealthy countries are building walls around access to and development of AI systems, people in countries like Kenya provide essential yet poorly paid labor to build them. And despite being built on these people's labor, the resulting AI only furthers their exploitation and disempowerment.

For example, an AI tool may generate images of white people as its default, reinforcing racial inequality, or skew toward lighter skin in response to requests for "beautiful" people. Images of women are more likely to be coded as sexual in nature than images of men in similar states of dress and activity, reflecting the widespread cultural objectification of women in training images and their accompanying text. All of these biases were evident in the AI Barbie dolls: the oversexualization of dolls representing the Caribbean, depictions of war for certain countries in the Middle East, inaccurate cultural clothing from Asia and a global erasure of Indigenous communities.

Things turn particularly dire when this veiled logic is used to make life-altering decisions. Law enforcement, medical care, schools and workplaces are all turning to the black box of AI to make decisions, such as with predictive policing tools, which are built on a foundation of notoriously inaccurate and biased crime data. These tools launder biased police data into biased policing, with algorithms placing emphasis on already over-policed neighborhoods. This bias laundering reappears in determinations of cash bail, hiring, housing and even welfare benefits and asylum. And when the objectivity of AI goes unquestioned, it can launder discriminatory and pseudoscientific claims, like predicting whether someone is a criminal based on the shape of their face.

So what happens if — and more likely when — these decisions are "wrong"? How can a victim prove it, and who is legally and ethically responsible? These are questions yet to be answered.

AI without human input can't be trusted, and this is particularly important considering that it's often the communities most impacted by harmful bias that have the least access to the development and oversight of these AI systems. Consider this debacle within the context of the current Writers Guild of America and SAG-AFTRA strikes: automated systems produce outputs that are generally low-quality, and without human contributors such as editors and mindful writers, we will see inaccurate, offensive and harmful AI creations creep into our media landscape.

The solution is to follow the example of security research and open science. Developing these new tools in an open, auditable way allows for more knowledge sharing and for consensual, transparent data collection. It would help researchers across contexts address these biases and build AI systems that actually help writers, rather than serving as a poor, low-quality replacement for them.

This would also make it possible for the would-be subjects to create their own systems and to change how their images and communities are depicted, ultimately breaking down oppressive power imbalances. AI can be done correctly and equitably, but only if exploitative data harvesting is dismantled so that everyone benefits.

We need to reclaim this data and harness its power to build tools for a better world — perhaps starting with better AI-generated Barbie dolls.

Paige Collings is Senior Speech and Privacy Activist and Rory Mir is Associate Director of Community Organizing at the Electronic Frontier Foundation, a nonprofit digital civil liberties organization headquartered in San Francisco.

