
Meet the people trying to protect us from AI

    A year ago, the idea of having a meaningful conversation with a computer was science fiction. But since the launch of OpenAI’s ChatGPT last November, life has started to feel more like a techno thriller with a fast-moving plot. Chatbots and other generative AI tools are starting to revolutionize the way people live and work. But whether this plot turns out to be uplifting or dystopian depends on who helps write it.

    Fortunately, artificial intelligence is evolving, and so is the cast of people who build and study it. This is a more diverse group of leaders, researchers, entrepreneurs and activists than the one that laid the foundations of ChatGPT. While the AI community remains predominantly male, some researchers and companies have been pushing in recent years to make it more welcoming to women and other underrepresented groups. And the field now includes many people concerned with more than just building algorithms or making money, thanks to a movement – led largely by women – that considers the technology’s ethical and societal implications. Here are some of the people shaping this accelerating storyline. — Will Knight

    About the art

    “I wanted to use generative AI to capture the potential and unease we feel as we explore our relationship with this new technology,” says artist Sam Cannon, who teamed up with four photographers to enhance portraits with AI-created backgrounds. “It felt like a conversation: I fed images and ideas to the AI, and the AI offered its own ideas in return.”


    Rumman Chowdhury

    Photo: Cheril Sanchez; AI art by Sam Cannon

    Rumman Chowdhury led Twitter’s ethical AI research until Elon Musk took over and fired her team. She is a co-founder of Humane Intelligence, a non-profit organization that uses crowdsourcing to expose vulnerabilities in AI systems and designs competitions that challenge hackers to induce bad behavior in algorithms. The first event, scheduled for this summer with support from the White House, will test generative AI systems from companies like Google and OpenAI. Chowdhury says large-scale, public testing is necessary because of the far-reaching implications of AI systems: “If the implications of this are going to have a big impact on society, aren’t the people in society the best experts?” —Khari Johnson


    Sarah Bird

    Photo: Annie Marie Musselman; AI art by Sam Cannon

    Sarah Bird’s job at Microsoft is to keep the generative AI that the company is adding to its Office apps and other products from going off the rails. While she has watched text generators like the one behind the Bing chatbot become more capable and useful, she has also seen them get better at spewing biased content and malicious code. Her team works to contain that dark side of the technology. AI could change many lives for the better, says Bird, but “none of that is possible if people are concerned about the technology producing stereotyped outputs.” —KJ


    Yejin Choi

    Photo: Annie Marie Musselman; AI art by Sam Cannon

    Yejin Choi, a professor in the School of Computer Science & Engineering at the University of Washington, is developing an open source model called Delphi designed to have a sense of right and wrong. She is interested in how people perceive Delphi’s moral statements. Choi wants systems that are as capable as those from OpenAI and Google but that don’t require huge resources. “The current focus on scale is very unhealthy for several reasons,” she says. “It’s a total concentration of power, just too expensive, and probably not the only way.” —WK


    Margaret Mitchell

    Photo: Annie Marie Musselman; AI art by Sam Cannon

    Margaret Mitchell founded Google’s Ethical AI research team in 2017. She was fired four years later after a dispute with executives over a paper she co-authored, which warned that large language models – the technology behind ChatGPT – could reinforce stereotypes and cause other ills. Mitchell is now chief ethics scientist at Hugging Face, a startup that develops open source AI software for programmers. She makes sure the company’s releases don’t bring unpleasant surprises and encourages the field to put humans above algorithms. Generative models can be useful, she says, but they can also undermine people’s sense of truth: “We risk losing touch with the facts of history.” —KJ


    Inioluwa Deborah Raji

    Photo: Aysia Stieb; AI art by Sam Cannon

    Inioluwa Deborah Raji started out in AI working on a project that uncovered bias in facial analysis algorithms: they were least accurate on women with dark skin. The findings prompted Amazon, IBM and Microsoft to stop selling facial recognition technology. Now Raji is working with the Mozilla Foundation on open source tools that help people examine AI systems, including large language models, for flaws such as bias and inaccuracy. Raji says the tools could help communities harmed by AI challenge the claims of powerful tech companies. “People are actively denying harm is being done,” she says, “so gathering evidence is integral to any kind of progress in this area.” —KJ


    Daniela Amodei

    Photo: Aysia Stieb; AI art by Sam Cannon

    Daniela Amodei previously worked on AI policy at OpenAI and helped lay the groundwork for ChatGPT. But in 2021, she and several others left the company to found Anthropic, a public benefit corporation charting its own approach to AI safety. The startup’s chatbot, Claude, has a “constitution” that guides its behavior, based on principles drawn from sources such as the UN’s Universal Declaration of Human Rights. Amodei, Anthropic’s president and co-founder, says such ideas will reduce misbehavior today and perhaps help limit more powerful AI systems of the future: “Long-term thinking about the potential consequences of this technology could be very important.” —WK


    Lila Ibrahim

    Photo: Ayesha Kazim; AI art by Sam Cannon

    Lila Ibrahim is chief operating officer at Google DeepMind, a research unit at the heart of Google’s generative AI projects. She sees running one of the world’s most powerful AI labs less as a job than as a moral calling. Ibrahim joined DeepMind five years ago, after nearly two decades at Intel, hoping to help AI evolve in ways that benefit society. One of her duties is to chair an internal review board that discusses how to broaden the benefits of DeepMind’s projects and avoid poor outcomes. “I thought if I could use some of my experience and expertise to bring this technology into the world in a more responsible way, it would be worth being here,” she says. — Morgan Meaker


    This article appears in the July/August 2023 issue.
