The new chatbots can change the world. Can you trust them?

    This month, Jeremy Howard, an artificial intelligence researcher, introduced an online chatbot called ChatGPT to his 7-year-old daughter. It had been released a few days earlier by OpenAI, one of the world’s most ambitious AI labs.

    He told her to ask the experimental chatbot whatever came to mind. She asked what trigonometry was good for, where black holes came from, and why chickens hatched their eggs. Each time it answered in clear, well-punctuated prose. When she asked for a computer program that could predict the trajectory of a ball thrown through the air, she got it.
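
    The article does not reproduce the program she received, but the underlying idea fits in a few lines. Here is a minimal sketch in Python, assuming a throw with no air resistance; the function name and parameters are illustrative, not ChatGPT’s actual output.

```python
# A minimal sketch of the kind of program the chatbot might produce: it predicts
# the trajectory of a ball thrown at a given speed and angle, using basic
# projectile motion with no air resistance. Names and parameters are illustrative.
import math

def trajectory(speed, angle_degrees, g=9.81, steps=20):
    """Return (x, y) points along the ball's flight path, in meters."""
    angle = math.radians(angle_degrees)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    flight_time = 2 * vy / g          # time until the ball returns to launch height
    points = []
    for i in range(steps + 1):
        t = flight_time * i / steps
        x = vx * t
        y = vy * t - 0.5 * g * t * t
        points.append((x, y))
    return points

if __name__ == "__main__":
    for x, y in trajectory(speed=15.0, angle_degrees=45.0):
        print(f"x = {x:5.2f} m, y = {y:5.2f} m")
```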

    Over the next few days, Dr. Howard – a data scientist and professor whose work inspired the creation of ChatGPT and similar technologies – came to see the chatbot as a new kind of personal tutor. It could teach his daughter math, science, and English, not to mention a few other important lessons. Chief among them: don’t believe everything you are told.

    “It’s great to see her learning like this,” he said. “But I also told her, don’t trust everything it gives you. It can make mistakes.”

    OpenAI is one of many companies, academic labs and independent researchers working to build more advanced chatbots. These systems can’t chat exactly like a human, but they often seem as if they can. They can also retrieve and repackage information at a speed that humans never could. They can be thought of as digital assistants, like Siri or Alexa, that better understand what you’re looking for and give it to you.

    After the release of ChatGPT – which has been used by over a million people – many experts believe these new chatbots are poised to reinvent or even replace internet search engines like Google and Bing.

    They can provide information in tight sentences, rather than long lists of blue links. They explain concepts in ways that people can understand. And they can provide facts, while also generating business plans, thesis topics, and other new ideas from scratch.

    “You now have a computer that can answer any question in a way that makes sense to a human,” said Aaron Levie, CEO of a Silicon Valley company, Box, and one of several executives exploring how these chatbots will change the technological landscape. “It can extrapolate and merge ideas from different contexts.”

    The new chatbots do this with what seems like complete confidence. But they don’t always tell the truth. Sometimes they even fail at simple arithmetic. They mix fact with fiction. And as they get better and better, people can use them to generate and spread falsehoods.

    Google recently built a system specifically for conversation called LaMDA, or Language Model for Dialogue Applications. This spring, a Google engineer claimed it was sentient. It wasn’t, but it captured the public’s imagination.

    Aaron Margolis, a data scientist in Arlington, Virginia, was among the limited number of people outside of Google who were allowed to use LaMDA through an experimental Google app, AI Test Kitchen. He was constantly amazed by its talent for open-ended conversation. It kept him entertained. But he cautioned that it could be a bit of a fabulist – as might be expected from a system trained on massive amounts of information posted on the Internet.

    “What it gives you is kind of like an Aaron Sorkin movie,” he said. Mr. Sorkin wrote “The Social Network,” a movie that has often been criticized for stretching the truth about Facebook’s origins. “Parts of it will be true, and parts will not be true.”

    He recently asked both LaMDA and ChatGPT to chat with him as if they were Mark Twain. When asked, LaMDA was quick to describe a meeting between Twain and Levi Strauss, saying the writer had worked for the blue jeans magnate while living in San Francisco in the mid-1800s. It seemed true. But it wasn’t. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

    Scientists call that problem “hallucination.” Like a good storyteller, chatbots have a way of turning what they’ve learned into something new – regardless of whether it’s true.

    LaMDA is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled after the network of neurons in the brain. This is the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets.

    A neural network learns skills by analyzing data. By identifying patterns in thousands of cat photos, for example, it can learn to recognize a cat.
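
    In practice, “learning from data” means adjusting a large set of numerical weights until the network’s guesses match the labels it is shown. The toy classifier below, written with the PyTorch library and trained on random placeholder data, is a rough illustration of that loop, not the architecture of any system named in this article.

```python
# A toy illustration of learning from examples with the PyTorch library: a tiny
# classifier maps images to "cat" / "not cat" labels by adjusting its weights to
# reduce prediction error. Real systems use far larger networks and millions of
# labeled photos; the data here is a random placeholder.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),                  # turn a 3x64x64 image into one long vector
    nn.Linear(3 * 64 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 2),             # two outputs: "cat" vs. "not cat"
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch: 8 random "photos" and random labels stand in for real data.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(100):
    optimizer.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)
    loss.backward()                # compute how to nudge each weight
    optimizer.step()               # apply the nudge; patterns emerge from repetition
```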

    Five years ago, researchers at Google and labs like OpenAI began designing neural networks that analyzed massive amounts of digital text, including books, Wikipedia articles, news stories, and online chat logs. Scientists call them “large language models.” By identifying billions of different patterns in the way people connect words, numbers and symbols, these systems learned to generate text on their own.
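
    At its core, that generation is next-word prediction repeated over and over. The snippet below sketches the idea using the Hugging Face transformers library and the small, publicly available GPT-2 model – far smaller than the systems described in this article, but trained on web text in the same basic way.

```python
# A small sketch of text generation by next-word prediction, assuming the
# transformers library is installed. GPT-2 is a much smaller public model than
# the systems discussed in the article; the prompt is just an example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The history of blue jeans began when"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```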

    Their ability to generate language surprised many researchers in the field, including many of the researchers who built them. The technology could mimic what humans had written and combine disparate concepts. You could ask it to write a “Seinfeld” scene in which Jerry learns an obscure technique called the bubble sort algorithm – and it would.
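
    Bubble sort itself is short enough to show directly. This is the standard algorithm the scene referred to – repeatedly sweep through a list and swap neighbors that are out of order – written here in plain Python, not the chatbot’s output.

```python
# Standard bubble sort: sweep through the list, swapping adjacent items that are
# out of order, and stop once a full sweep makes no swaps.
def bubble_sort(items):
    items = list(items)                    # work on a copy
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                    # already sorted; stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))        # [1, 2, 4, 5, 8]
```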

    With ChatGPT, OpenAI has been working to refine the technology. It doesn’t do as well at free-flowing conversation as Google’s LaMDA. It is designed to work more like Siri, Alexa and other digital assistants. Like LaMDA, ChatGPT is trained on a sea of digital text pulled from the Internet.

    As people tested the system, it asked them to rate its responses. Were they convincing? Were they useful? Were they truthful? Then, through a technique called reinforcement learning, OpenAI used those assessments to fine-tune the system and more precisely define what it would and wouldn’t do.
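
    OpenAI has not published its exact recipe, but the shape of the loop is roughly this: candidate answers receive human ratings, and the ratings become a reward signal that nudges the system toward the answers people score highly. The toy below applies a textbook gradient-bandit update to three hypothetical answers; real systems train a separate reward model and update a large neural network instead.

```python
# A deliberately simplified sketch of learning from human ratings, not OpenAI's
# actual method: ratings act as rewards, and a toy "policy" (a preference weight
# per candidate answer) shifts toward answers that score above average.
import math
import random

# Hypothetical candidate answers to one question, with human ratings from 0 to 1.
candidates = {
    "a confident but wrong answer": 0.1,
    "a correct, clearly worded answer": 0.9,
    "a refusal of an inappropriate request": 0.8,
}

preferences = {answer: 0.0 for answer in candidates}   # the toy policy

def softmax(prefs):
    exps = {a: math.exp(p) for a, p in prefs.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

def sample(probs):
    r, cumulative = random.random(), 0.0
    for answer, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return answer
    return answer

learning_rate, baseline, seen = 0.5, 0.0, 0

for step in range(500):
    probs = softmax(preferences)
    answer = sample(probs)
    reward = candidates[answer]                  # stand-in for a live human rating
    seen += 1
    baseline += (reward - baseline) / seen       # running average of rewards
    # Gradient-bandit update: the chosen answer's preference rises when its rating
    # beats the running average (and falls otherwise); the others move the opposite way.
    for a in preferences:
        if a == answer:
            preferences[a] += learning_rate * (reward - baseline) * (1 - probs[a])
        else:
            preferences[a] -= learning_rate * (reward - baseline) * probs[a]

print(max(preferences, key=preferences.get))     # typically the highest-rated answer
```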

    “This allows us to get to the point where the model can communicate with you and admit when it’s wrong,” said Mira Murati, OpenAI’s chief technology officer. “It can reject something that is inappropriate, and it can challenge a question or premise that is incorrect.”

    The method was not perfect. OpenAI warned users of ChatGPT that it “may occasionally generate incorrect information” and “produce harmful instructions or biased content.” But the company plans to continue refining the technology, reminding people who use it that it’s still a research project.

    Google, Meta and other companies are also addressing accuracy issues. Meta recently removed an online preview of its chatbot, Galactica, because it repeatedly generated incorrect and biased information.

    Experts have warned that companies cannot control the fate of these technologies. Systems like ChatGPT, LaMDA and Galactica are based on ideas, research papers and computer code that have been freely circulating for years.

    Companies like Google and OpenAI can advance the technology faster than others. But their latest technologies have been reproduced and widely distributed. They cannot prevent people from using these systems to spread misinformation.

    Just as Dr. Howard hoped his daughter would learn not to trust everything she read on the Internet, he hoped society would learn the same lesson.

    “You could program millions of these bots to look like humans, with conversations designed to convince people of a certain point of view,” he said. “I have been warning about this for years. Now it is clear that this is just waiting to happen.”