Skip to content

We need a new right to repair for artificial intelligence

    There is a growing trend of people and organizations rejecting the unsolicited imposition of AI into their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors in California filed a class action against Nvidia for allegedly training their AI platform NeMo on their copyrighted work. Two months later, actress Scarlett Johansson sent a legal letter to OpenAI when she realized the new ChatGPT voice was “eerily similar to hers.”

    The technology is not the problem here. The power dynamics are. People understand that this technology is built on their data, often without our consent. It's no wonder that public trust in AI is declining. A recent Pew Research survey found that more than half of Americans are more concerned than excited about AI, a sentiment echoed by a majority of people from Central and South American, African and Middle Eastern countries in a World Risk Poll.

    By 2025, we will see people demanding more control over the way AI is used. How will that be achieved? An example is red teaming, a practice adopted from the military and used in cybersecurity. In a red teaming exercise, external experts are asked to 'infiltrate' or break a system. It acts as a test of where your defenses might go wrong so you can fix them.

    Red teaming is used by major AI companies to find problems in their models, but has not yet become widespread as a practice for public use. That will change in 2025.

    For example, law firm DLA Piper now uses red teaming with lawyers to directly test whether AI systems comply with legal frameworks. My nonprofit, Humane Intelligence, organizes red teaming exercises with non-technical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red teaming exercise that was supported by the White House. In 2025, our red teaming events will build on the experiences of everyday people to evaluate AI models for Islamophobia, and for their ability to enable online harassment of women.

    Overwhelmingly, when I host one of these exercises, the most common question I am asked is how we can move from identifying problems to solving problems on our own. In other words: people want a right to repair.

    An AI right to repair could look like this: A user could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company. Third-party groups, such as ethical hackers, can create patches or fixes for problems that anyone can access. Or you can hire an independent, accredited party to evaluate an AI system and customize it for you.

    While this is still an abstract idea today, we are paving the way for a right to repair to become a reality in the future. Overturning the current dangerous power dynamics will take some work. We are quickly being forced to normalize a world where AI companies simply insert new and untested AI models into real systems, with regular people as collateral damage. The right to repair gives everyone the power to determine how AI is used in their lives. 2024 was the year the world woke up to the ubiquity and impact of AI. 2025 is the year we demand our rights.