Meet ChatGPT’s right-wing alter ego

Elon Musk caused a stir last week when he told the recently fired right-wing provocateur Tucker Carlson that he plans to build “TruthGPT,” a competitor to OpenAI’s ChatGPT. Musk says the wildly popular bot displays “woke” bias, and that his version will be a “maximum truth-seeking AI,” a framing that suggests only his own political views reflect reality.

Musk is far from alone in his concern about political bias in language models, but others are trying to use AI to bridge political divisions rather than push particular points of view.

David Rozado, a data scientist based in New Zealand, was one of the first to draw attention to the problem of political bias in ChatGPT. Several weeks ago, after documenting what he considered liberal-leaning responses from the bot on issues including taxation, gun control, and free markets, he created an AI model called RightWingGPT that expresses more conservative viewpoints. It is keen on gun ownership and no fan of taxes.

Rozado started with a language model called Davinci GPT-3, similar to but less powerful than the one that powers ChatGPT, and fine-tuned it with additional text, at a cost of a few hundred dollars spent on cloud computing. Whatever you think of the project, it shows how easy it will be for people to bake different perspectives into language models in the future.
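Concretely, the kind of fine-tuning Rozado describes looked roughly like the sketch below, which uses OpenAI’s legacy fine-tuning API from that era. The file name, training data, and hyperparameters here are hypothetical placeholders, not details from his project.

```python
# Sketch of fine-tuning a base GPT-3 model on custom text, using the
# legacy OpenAI fine-tuning API (openai-python < 1.0). The training
# file and its contents are hypothetical placeholders.
import openai

openai.api_key = "sk-..."  # your API key

# Training data is JSONL: one {"prompt": ..., "completion": ...} pair per line.
upload = openai.File.create(
    file=open("political_viewpoints.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune of the base "davinci" model on the uploaded file.
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",  # the GPT-3 model Rozado reportedly started from
    n_epochs=4,       # a typical default; more epochs cost more
)
print(job.id, job.status)
```

The cheapness comes from the division of labor: the expensive pretraining is already done, and a fine-tune on a modest pile of curated text only nudges the model’s tendencies.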

Rozado tells me he also plans to build a more liberal language model called LeftWingGPT, as well as one called DepolarizingGPT, which he says will demonstrate a “depolarizing political position.” Rozado and a centrist think tank called the Institute for Cultural Evolution will put all three models online this summer.

“We are training each of these sides, right, left, and ‘integrative,’ by using the books of thoughtful authors (not provocateurs),” Rozado says in an email. Text for DepolarizingGPT comes from conservative voices including Thomas Sowell, Milton Friedman, and William F. Buckley, as well as liberal thinkers such as Simone de Beauvoir, Orlando Patterson, and Bill McKibben, along with other “curated sources.”

So far, interest in developing more politically aligned AI bots has tended to fuel political division rather than bridge it. Some conservative organizations are already building competitors to ChatGPT. For example, the social network Gab, known for its far-right user base, says it is working on AI tools with “the ability to freely generate content without the constraints of liberal propaganda wrapped tightly around its code.”

Research suggests that language models can subtly influence users’ moral perspectives, so any political slant they carry could have real consequences. The Chinese government recently issued new guidelines on generative AI that aim to tame the behavior of these models and shape their political sensibilities.

OpenAI has warned that more capable AI models may have “greater potential to reinforce entire ideologies, worldviews, truths and falsehoods.” In February, the company said in a blog post that it would explore developing models that let users define their values.

Rozado, who says he has not yet spoken to Musk about his project, is trying to provoke reflection rather than create bots that spread a certain worldview. “Hopefully as a society we can … learn to create AIs that are focused on building bridges rather than dividing,” he says.

Rozado’s goal is admirable, but cutting through the fog of political division to establish what is objectively true, and teaching that to language models, may prove the biggest obstacle of all.

ChatGPT and similar conversational bots are built on complex algorithms that are fed massive amounts of text and trained to predict which word should follow a given sequence of words. That process can generate remarkably coherent output, but it also absorbs many subtle biases from the training material consumed. Just as important, these algorithms do not learn to understand objective facts, and they tend to make things up.
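To make that “predict the next word” objective concrete, here is a minimal sketch using the openly available GPT-2 model via the Hugging Face transformers library. This illustrates the same training objective, not ChatGPT’s actual (closed) model, and the prompt is an arbitrary example.

```python
# Minimal illustration of next-word prediction with an open model (GPT-2),
# the same basic objective that underlies ChatGPT-style systems.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Taxes on the wealthy should be"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the *next* token, given the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12s}  {p.item():.3f}")
```

The model assigns a probability to every token in its vocabulary at each step, so whatever slants exist in the text it was trained on surface directly in those probabilities.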