Elon Musk's criticism of 'Woke AI' suggests ChatGPT could be a target of the Trump administration

    Mittelsteadt adds that Trump could punish companies in several ways. For example, he cites the way the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president's view of the Washington Post and its owner, Jeff Bezos.

    It wouldn't be difficult for policymakers to point to evidence of political bias in AI models, even though such bias cuts both ways.

    A 2023 study by researchers from the University of Washington, Carnegie Mellon University and Xi'an Jiaotong University found a range of political preferences across several major language models. It also showed how this bias can affect the performance of hate speech or disinformation detection systems.

    Another study, conducted by researchers at the Hong Kong University of Science and Technology, found biases in several open source AI models on polarizing issues such as immigration, reproductive rights and climate change. Yejin Bang, a PhD student involved in the work, says that most models tend to be liberal and US-centric, but the same models can express a variety of liberal or conservative biases depending on the subject.

    AI models pick up political biases because they are trained on vast amounts of internet data that inevitably reflect a wide range of perspectives. Most users may not notice any bias in the tools they use because models include guardrails that prevent them from generating certain harmful or biased content. Even so, these biases can leak out subtly, and the additional training models receive to restrict their output can introduce further bias. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced point of view,” says Bang.

    The problem could get worse as AI systems become more ubiquitous, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who has developed a tool called the Toxicity Rabbit Hole Framework, which exposes the various societal biases of large language models. “We fear that a vicious cycle will emerge as new generations of LLMs will be increasingly trained on data contaminated by AI-generated content,” he says.

    “I am convinced that these biases within LLMs are already a problem and will probably become even greater in the future,” says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology, who conducted an analysis of LLMs for biases related to German politics.

    Rettenberger suggests that political groups may also attempt to influence LLMs to promote their own views over those of others. “If someone is very ambitious and has bad intentions, it may be possible to manipulate LLMs in certain directions,” he says. “I see the manipulation of training data as a real danger.”

    There have already been some attempts to shift the balance of biases in AI models. Last March, a programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk himself has promised to make Grok, the AI chatbot built by xAI, “maximally truth-seeking” and less biased than other AI tools, although in practice it also hedges when it comes to thorny political questions. (A staunch Trump supporter and immigration hawk, Musk's own vision of “less biased” could well translate into more right-leaning outputs.)

    Next week's elections in the United States are hardly likely to heal the discord between Democrats and Republicans, but if Trump wins, the talk of anti-woke AI could become much louder.

    Musk offered an apocalyptic take on the issue at this week's event, referencing an incident in which Google's Gemini said nuclear war would be preferable to misgendering Caitlyn Jenner. “If you have an AI programmed for that kind of thing, it might conclude that the best way to ensure that no one is mistreated is to destroy all humans, thus making the chance of future abuse zero,” he said.