ChatGPT and its cousins are both surprisingly smart and disappointingly stupid. They can generate beautiful poems, solve scientific puzzles, and debug spaghetti code. But we also know they often make things up, forget what they've been told, and behave erratically.
Inflection AI, a company founded by researchers who previously worked on large artificial intelligence projects at Google, OpenAI and Nvidia, built a bot called Pi that appears to make fewer blunders and is more adept at social conversations.
Inflection designed Pi to address some of the problems facing today’s chatbots. Programs like ChatGPT use artificial neural networks that try to predict which words should follow a piece of text, such as an answer to a user’s question. With enough training on billions of lines of text written by humans, supported by powerful computers, these models are able to come up with coherent and relevant answers that feel like real conversation. But they also make things up and go off the rails.
Mustafa Suleyman, Inflection’s CEO, says the company carefully curated Pi’s training data to minimize the chance of toxic language creeping into its responses. “We’re pretty selective about what goes into the model,” he says. “We’re taking a lot of information that’s available on the open web, but not absolutely everything.”
Suleyman, a co-founder of the AI company DeepMind, which is now part of Google, also says limiting the length of Pi's answers reduces, but doesn't eliminate, the possibility of factual errors.
Based on my own time chatting with Pi, the result is compelling, albeit more limited and less useful than ChatGPT and Bard. Those chatbots got better at answering questions through additional training in which people rated the quality of their answers. That feedback is used to steer the bots towards more satisfying responses.
Suleyman says Pi was trained in a similar way, with an emphasis on being friendly and supportive but without a human-like personality, which could leave users confused about the program's capabilities. Chatbots that take on a human persona have already proven problematic. Last year, a Google engineer controversially claimed that the company's LaMDA AI model, one of the first programs to demonstrate just how capable and engaging large AI language models could be, might be sentient.
Pi is also able to keep track of all of its conversations with a user, giving it a kind of long-term memory that ChatGPT lacks and that is intended to make its chats more consistent.
“Good conversation is about responding to what someone says, asking clarifying questions, being curious, being patient,” says Suleyman. “It’s there to help you think, rather than give you strong directional advice, to help you unpack your thoughts.”
Pi takes on a chatty, caring personality, even though it doesn't pretend to be human. It frequently asked how I was doing and offered words of encouragement. Pi's short responses would also make it work well as a voice assistant, where long-winded answers and errors are especially jarring. You can try talking to it yourself on Inflection's website.
The incredible hype around ChatGPT and similar tools means that many entrepreneurs are hoping to strike it rich in the field.
Suleyman worked as a manager within the Google team that built the LaMDA chatbot. Google was hesitant to release the technology, frustrating some who worked on it and believed it had great commercial potential.