
Black artists say AI is biased, with algorithms erasing their history

    The artist Stephanie Dinkins has long been a pioneer in combining art and technology in her Brooklyn-based practice. In May, she received $100,000 from the Guggenheim Museum for her groundbreaking innovations, including an ongoing series of interviews with Bina48, a humanoid robot.

    For the past seven years, she has been experimenting with AI’s ability to realistically depict Black women laughing and crying, using various word prompts. The initial results were underwhelming, if not alarming: her algorithm produced a pink-shaded humanoid shrouded in a black cloak.

    “I was expecting something with a little more semblance of a Black woman,” she said. And while the technology has improved since her first experiments, Dinkins found herself using increasingly elaborate wording in her text prompts to help the AI image generators achieve her desired image, “to give the machine a chance to give me what I wanted.” But whether she uses the term “African American woman” or “Black woman,” the machine still frequently produces distortions that mangle facial features and hair textures.

    “Improvements obscure some of the deeper questions we should be asking about discrimination,” Dinkins said. The artist, who is Black, added: “The prejudices are deeply embedded in these systems, so they become ingrained and automatic. If I work in a system that uses algorithmic ecosystems, I want that system to know who Black people are in nuanced ways so that we can feel more supported.”

    She’s not alone in asking tough questions about the troubling relationship between AI and race. Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large data sets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, AI technologies appear to ignore or distort artists’ text prompts, affecting how Black people are depicted in images; in other cases, they seem to stereotype or censor Black history and culture.

    The discussion of racial bias within artificial intelligence has exploded in recent years, with studies showing that facial recognition technologies and digital assistants struggle to identify the images and speech patterns of non-white people. The studies raised broader questions about fairness and bias.

    Major companies behind AI image generators, including OpenAI, Stability AI, and Midjourney, have committed to improving their tools. “Bias is an important, industry-wide issue,” Alex Beck, a spokeswoman for OpenAI, said in an email interview, adding that the company is constantly trying to “improve performance, reduce bias, and reduce malicious output.” She declined to say how many employees were working on racial bias or how much money the company had allocated to the problem.

    “Black people are used to being unseen,” Senegalese artist Linda Dounia Rebeiz wrote in an introduction to her “In/Visible” exhibition for Feral File, an NFT marketplace. “When we are seen, we are used to being misrepresented.”

    To prove her point during an interview with a reporter, 28-year-old Rebeiz asked OpenAI’s image generator, DALL-E 2, to imagine buildings in her hometown of Dakar. The algorithm produced barren desert landscapes and ruined buildings that, according to Rebeiz, looked nothing like the houses on the coast in the Senegalese capital.

    “It’s demoralizing,” Rebeiz said. “The algorithm leans towards a cultural image of Africa that the West has created. By default, it is based on the worst stereotypes that already exist on the internet.”

    Last year, OpenAI said it was developing new techniques to diversify the images produced by DALL-E 2 so that the tool “generates images of people that more accurately reflect the diversity of the world’s population.”

    Minne Atairu, an artist featured in Rebeiz’s exhibit, is a Ph.D. candidate at Columbia University’s Teachers College who had planned to use image generators with young students of color in the South Bronx. But she now worries “that students may generate offensive images,” she said.

    The Feral File exhibit features images from her “Blonde Braids Studies,” which explore the limitations of Midjourney’s algorithm in producing images of Black women with naturally blonde hair. When the artist asked for an image of Black identical twins with blonde hair, the program instead produced a sibling with lighter skin.

    “That tells us where the algorithm is getting its images from,” Atairu said. “It doesn’t necessarily draw from a corpus of Black people, but one aimed at white people.”

    She said she feared that young Black children would try to generate images of themselves and see children whose skin had been lightened. Atairu recalled some of her earlier experiments with Midjourney, before recent updates improved its capabilities. “It would generate images that looked like blackface,” she said. “You’d see a nose, but it wasn’t a human nose. It looked like a dog’s nose.”

    In response to a request for comment, David Holz, Midjourney’s founder, said in an email: “If anyone finds a problem with our systems, we ask them to send us specific samples so we can investigate.”

    Stability AI, which provides image generator services, said it planned to work with the AI industry to improve bias evaluation techniques across a greater diversity of countries and cultures. Bias, the company said, is caused by “overrepresentation” in its overall data sets, though it did not specify whether the overrepresentation of white people was the issue.

    Earlier this month, Bloomberg analyzed more than 5,000 images generated by Stability AI and found that the program reinforced racial and gender stereotypes, with lighter-skinned people typically portrayed in high-paying jobs while darker-skinned subjects were labeled “dishwasher” and “housekeeper.”

    These problems have not stopped the wave of investment in the technology industry. A recent rosy report from the consulting firm McKinsey predicted that generative AI would add $4.4 trillion to the global economy annually. Last year, nearly 3,200 startups received $52.1 billion in funding, according to the GlobalData Deals Database.

    Technology companies have grappled with accusations of bias in their depictions of dark-skinned people since the early days of color photography in the 1950s, when companies like Kodak used white models to develop their color technology. Eight years ago, Google disabled its Photos app’s ability to search for gorillas and monkeys because its algorithm was incorrectly sorting Black people into those categories. As of May this year, the issue had still not been resolved. Two former employees who worked on the technology told The New York Times that Google had not trained its AI system with enough images of Black people.

    Other experts who study artificial intelligence said bias runs deeper than data sets, citing the early development of this technology in the 1960s.

    “The issue is more complicated than data bias,” said James E. Dobson, a cultural historian at Dartmouth College and author of a recent book on the birth of computer vision. According to his research, there was very little discussion of race during the early days of machine learning, and most of the scientists who worked on the technology were white males.

    “It’s hard to separate today’s algorithms from that history because engineers build on those earlier versions,” Dobson said.

    To lessen the appearance of racial bias and hateful images, some companies have banned certain words from text prompts that users submit to generators, such as “slave” and “fascist.”

    But Dobson said companies hoping for a simple solution, such as censoring the kind of prompts users can submit, avoided the more fundamental issues of bias in the underlying technology.

    “It is a worrying time as these algorithms become more complex. And when you see garbage coming out, you have to wonder what kind of garbage-producing process is still inside the model,” the professor added.

    Auriea Harvey, an artist who was part of the Whitney Museum’s recent “Refiguring” exhibition on digital identities, encountered these bans for a recent project with Midjourney. “I wanted to query the database about what it knew about slave ships,” she said. “I got a message that Midjourney would suspend my account if I continued.”

    Dinkins ran into similar problems with NFTs she made and sold showing how okra was brought to North America by enslaved people and settlers. She was censored when she tried to use the generative program Replicate to create images of slave ships. Eventually, she learned to outsmart the censors by using the term “pirate ship.” The image she received was an approximation of what she wanted, but it also raised troubling questions for the artist.

    “What does this technology do to history?” Dinkins asked. “You see someone trying to correct for bias, but at the same time that erases a piece of history. I think those erasures are just as dangerous as any kind of bias, because we just start forgetting how we got here.”

    Guggenheim Museum Chief Curator Naomi Beckwith credited Dinkins’ nuanced approach to issues of representation and technology as one of the reasons the artist received the museum’s first Art & Technology award.

    “Stephanie has become part of a tradition of artists and cultural workers poking holes in these overarching and totalizing theories of how things work,” Beckwith said. The curator added that her own initial paranoia about AI programs replacing human creativity greatly diminished when she realized that these algorithms knew virtually nothing about black culture.

    But Dinkins isn’t quite ready to give up on the technology. She continues to use it in her artistic projects, with skepticism. “If the system can generate a truly faithful image of a Black woman crying or laughing, can we rest?”