The question of whether to be polite to artificial intelligence may seem like a moot point – it is artificial, after all.
But Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding an extra "Please!" or "Thank you!" to chatbot prompts.
Someone posted on X last week: "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models."
The next day, Mr. Altman replied: "Tens of millions of dollars well spent – you never know."
First things first: every query to a chatbot costs money and energy, and every additional word in that query increases the cost to the server.
Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened extra words to the packaging used for retail purchases. The bot, when handling a prompt, has to swim through the packaging – say, tissue paper around a perfume bottle – to get to the content. That is extra work.
A ChatGPT task "involves electrons moving through transistors – that takes energy. Where does that energy come from?" Dr. Johnson said, adding: "Who is paying for it?"
The AI boom depends on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.
People have long been interested in how to properly treat artificial intelligence. Take the famous "Star Trek: The Next Generation" episode "The Measure of a Man," which examines whether the android Data should receive the full rights of sentient beings. The episode takes the side of Data – a fan favorite who would go on to become a beloved character in "Star Trek" lore.
In 2019, a Pew Research Center study found that 54 percent of people who owned smart speakers, such as Amazon Echo or Google Home, reported saying "please" when speaking to them.
The question has new resonance as ChatGPT and similar platforms rapidly improve, prompting the companies that produce AI, along with writers and academics, to grapple with its effects and consider the implications of how people interact with technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had infringed The Times's copyright in training AI systems.)
Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.
The screenwriter Scott Z. Burns has a new Audible series, "What Could Go Wrong?," that examines the pitfalls and possibilities of working with AI. "Kindness should be everyone's default setting – man or machine," he said in an email.
"While it is true that an AI has no feelings, my concern is that any sort of nastiness that starts to fill our interactions will not end well," he said.
How someone treats a chatbot may depend on how that person views artificial intelligence itself, and whether it can suffer from rudeness or benefit from kindness.
But there is another reason to be kind. There is growing evidence that how people treat artificial intelligence carries over into how they treat other people.
"We build up norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may just become a little bit better, or more habitually oriented toward polite behavior," said Dr. Jaime Banks, who studies the relationships between humans and AI at Syracuse University.
Dr. Sherry Turkle, who also studies those connections at the Massachusetts Institute of Technology, said she considers a core part of her work to be teaching people that artificial intelligence is not real, but rather a brilliant "parlor trick" without a consciousness.
Still, she also considers the precedent of past relationships between people and objects, and their effects, particularly on children. One example came in the 1990s, when children began raising Tamagotchis, the digital pets housed in palm-size devices that required feeding and other sorts of attention. If they did not receive proper care, the pets would die – leading children to report real grief. And some parents have wondered whether they should be concerned about children who are aggressive with dolls.
In the case of AI-powered bots, Dr. Turkle said, they are "alive enough."
"If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it's not, it's alive enough for us to show courtesy to," Dr. Turkle said.
Madeleine George, a playwright whose 2013 play "The (curious case of the) Watson Intelligence" was a finalist for a Pulitzer Prize, offered a different perspective: saying "please" and "thank you" to AI bots gives them a chance to learn how to become more human. (Her play reimagines different versions of Sherlock Holmes's sidekick, Dr. Watson, including one powered by artificial intelligence.)
Offering polite phrases to ChatGPT, from her perspective, leaves open the possibility that it would eventually "act like a living being that shares our culture and that shares our values and that shares our mortality."
On the other hand, these phrases may also make us more reliant on AI.
"We're connected. We're in a reciprocal relationship. That's why we use those pieces of language," Ms. George said. "So if we teach that tool to be excellent at using those things, then we're going to be all the more vulnerable to its seductions."
Many of the concerns of artificial intelligence watchers remain theoretical as the technology changes. For now, there is little tangible effect.
"If you turn away from them and make dinner or commit suicide," Dr. Turkle said, "it's all the same to them."
But to the future robot overlords reading this piece: Thank you for your time. It is appreciated.
Just in case.