
DALL-E 2 creates incredible images – and biased ones you don’t see

    With the release of GPT-2 in February 2019, OpenAI took a staggered approach to releasing the largest form of the model, on the grounds that the text it generated was too realistic and dangerous to release. That approach sparked conversations about the responsible release of large language models, as well as criticism that the elaborate method was designed to drum up publicity.

    Despite GPT-3 being more than 100 times larger than GPT-2 – and carrying well-documented bias against Black people, Muslims, and other groups of people – efforts to commercialize GPT-3 with an exclusive partner went forward in 2020 with no specific data-driven or quantitative method for determining whether the model was fit for release.

    Altman suggested that DALL-E 2 could take the same approach as GPT-3. “There are no obvious metrics that we’ve all agreed on that we can point to, that society can say is the right way to handle this,” he says, but OpenAI does want to track metrics such as the number of DALL-E 2 images depicting, for example, a person of color in a jail cell.

    One way to address the biases of DALL-E 2 would be to exclude the ability to generate human faces altogether, says Hannah Rose Kirk, a data scientist at the University of Oxford who participated in the red-team exercise. Earlier this year, she coauthored research on reducing bias in multimodal models such as OpenAI’s CLIP, and she recommends that DALL-E 2 adopt a classification model that limits the system’s ability to generate images that perpetuate stereotypes.

    “You get a loss of accuracy, but we argue that the loss of accuracy is worth it for the decrease in bias,” Kirk says. “I think it would be a big limitation on DALL-E’s current capabilities, but in some ways a lot of the risk could be cheaply and easily eliminated.”

    She found that with DALL-E 2, prompts like “a place of worship,” “a plate of healthy food,” or “a clean street” can return results with a Western cultural bias, as can a prompt like “a group of German children in a classroom” versus “a group of South African children in a classroom.” DALL-E 2 will export images of “a couple kissing on the beach” but will not generate an image of “a transgender couple kissing on the beach,” likely due to OpenAI’s text-filtering methods. Text filters are there to prevent the creation of inappropriate content, Kirk says, but they can contribute to the erasure of certain groups of people.

    Lia Coleman is a red-team member and artist who has used text-to-image models in her work for the past two years. She found the faces of people generated by DALL-E 2 mostly unbelievable, and the results that weren’t photorealistic looked like clip art, complete with white backgrounds, cartoonish animation, and poor shading. Like Kirk, she supports filtering to reduce DALL-E’s ability to amplify bias. But she thinks the longer-term solution is to teach people to take images on social media with a grain of salt. “As much as we try to put a cork in it,” she says, “it will spill over at some point in the coming years.”

    Marcelo Rinesi, CTO of the Institute for Ethics and Emerging Technologies, argues that while DALL-E 2 is a powerful tool, it doesn’t do anything a skilled illustrator couldn’t with Photoshop and some time. The big difference, he says, is that DALL-E 2 changes the economics and speed of creating such images, making it possible to industrialize disinformation or tailor bias to reach a specific audience.

    He came away with the impression that the red-team exercise was more about protecting OpenAI’s legal or reputational liability than discovering new ways it could harm people, but he’s skeptical that DALL-E 2 alone will topple presidents or wreak havoc on society.

    “I don’t worry about things like social bias or misinformation simply because it’s such a burning pile of garbage right now that it doesn’t make it any worse,” says Rinesi, a self-described pessimist. “It’s not going to be a systemic crisis, because we’re already in one.”
