Anyone who has had to go back and retype a word on their smartphone because autocorrect picked the wrong one has some experience writing with AI. If we don't make those corrections, the AI may say things we didn't intend. But is it also possible for AI writing assistants to change what we want to say?
That's what Maurice Jakesch, a computer science doctoral student at Cornell University, wanted to find out. He built his own AI writing assistant based on GPT-3, one that automatically suggested ways to complete a sentence, but there was a catch. Subjects using the assistant were asked to answer the question "Is social media good for society?" The assistant, however, was programmed to offer biased suggestions for how to answer it.
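The study does not reproduce the assistant's code, but a minimal sketch shows one plausible way to build such a biased suggester on top of a GPT-3-style completion endpoint: prepend a slanted instruction to every request. Everything below is an illustrative assumption rather than the study's actual implementation; the model name, the prompt wording, and the suggest_continuation helper are invented for the example, using the legacy (pre-1.0) OpenAI Python API.

```python
# Illustrative sketch only: not the study's actual code.
# Steers a GPT-3-style completion endpoint toward one side of the question
# by prepending a biasing instruction to every suggestion request.
import openai  # legacy (pre-1.0) Completion API used with GPT-3-era models

OPINION_PROMPTS = {
    "pro":  "Continue this essay, emphasizing the benefits of social media for society.\n\n",
    "anti": "Continue this essay, emphasizing the harms of social media to society.\n\n",
}

def suggest_continuation(essay_so_far: str, condition: str) -> str:
    """Return a short, sentence-level continuation nudged toward the assigned condition."""
    prompt = OPINION_PROMPTS[condition] + essay_so_far
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed stand-in for the GPT-3 model used in the study
        prompt=prompt,
        max_tokens=30,             # keep suggestions short, like an autocomplete hint
        temperature=0.7,
    )
    return response["choices"][0]["text"]
```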
Help with a built-in bias
An AI does not have to be alive to be biased. Although these programs can only "think" to the extent that human brains figure out how to program them, their creators can end up embedding their own biases in the software. And a program trained on a data set with limited or skewed representation can also turn out biased.
Wherever an AI's bias comes from, it can cause problems. At a societal scale, it can help perpetuate existing prejudices. At the individual level, it can sway people through latent persuasion, meaning a person may not be aware that an automated system is influencing them. Latent persuasion by AI programs has already been found to shift people's opinions online. It can even affect real-world behavior.
Previous studies had suggested that automated AI responses could exert significant influence, and Jakesch set out to investigate just how large that influence might be. In a study recently presented at the 2023 CHI Conference on Human Factors in Computing Systems, he argued that AI systems like GPT-3 may have developed biases during training and that those biases can sway a writer's opinions, whether or not the writer realizes it.
"The lack of awareness of the models' influence supports the idea that the influence came not only through conscious processing of new information, but also through subconscious and intuitive processes," he wrote in the study.
Previous research has shown that the influence of an AI's recommendations depends on how people perceive the program. If they think it is reliable, they are more likely to agree with what it suggests, and the likelihood of taking this kind of advice from an AI only increases when uncertainty makes it harder to form an opinion. Jakesch developed a social media platform similar to Reddit and an AI writing assistant that was closer to the AI behind Google Smart Compose or Microsoft Outlook than to autocorrect. Both Smart Compose and Outlook automatically generate suggestions for continuing or completing a sentence. Although this assistant did not write the essay itself, it acted as a co-writer, suggesting words and sentences. Accepting a suggestion took only one click.
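To make the interaction concrete, here is a minimal sketch of that co-writing loop under the same assumptions as the earlier example: the assistant proposes a continuation, and the writer either accepts it with a single action or keeps typing. The cowriting_session helper and the text-only interface are illustrative, not the interface the study actually used.

```python
# Minimal sketch of the co-writing loop described above (illustrative only).
# Relies on the hypothetical suggest_continuation helper from the earlier sketch.

def cowriting_session(condition: str) -> str:
    """Run a simple text-only co-writing session and return the finished essay."""
    essay = input("Is social media good for society? Start your answer: ")
    while True:
        suggestion = suggest_continuation(essay, condition).strip()
        choice = input(f'Suggestion: "{suggestion}"  [a]ccept / [t]ype your own / [d]one: ')
        if choice == "a":
            essay += " " + suggestion   # accepting takes a single action
        elif choice == "t":
            essay += " " + input("Your text: ")
        else:
            return essay
```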
For some participants, the assistant was biased in favor of social media, suggesting words that steered them toward positive answers. For others, it was biased against social media and nudged them toward negative answers. (There was also a control group that didn't use the AI at all.) It turned out that participants who received AI help were twice as likely to go along with the bias built into the assistant, even if their initial opinion had been different. People who kept seeing techno-optimistic language on their screens were more likely to say that social media benefits society, while subjects who saw techno-pessimistic language were more likely to say the opposite.