
Google has a plan to stop its new AI from being dirty and rude

    Silicon Valley CEOs usually focus on the positives when announcing their company’s next big thing. In 2007, Apple’s Steve Jobs praised the “revolutionary user interface” and “breakthrough software” of the first iPhone. Google CEO Sundar Pichai struck a different note at his company’s annual developer conference on Wednesday, announcing a beta test of Google’s “most advanced conversational AI yet.”

    Pichai said the chatbot, known as LaMDA 2, can converse on any topic and had performed well in tests with Google employees. He announced a forthcoming app called AI Test Kitchen that will make the bot available for outsiders to try. But Pichai added a stark warning. “While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses,” he said.

    Pichai’s hesitant pitch illustrates the mixture of excitement, confusion and concern swirling around a series of recent breakthroughs in the capabilities of machine learning software that processes language.

    The technology has already enhanced the power of autocomplete and web search. It has also spawned new categories of productivity apps that help workers by generating fluent text or programming code. And when Pichai first unveiled the LaMDA project last year, he said it could eventually be put to work in Google’s search engine, virtual assistant and workplace apps. But for all that dazzling promise, it remains unclear how to reliably control these new AI wordsmiths.

    Google’s LaMDA, or Language Model for Dialogue Applications, is an example of what machine learning researchers call a large language model. The term describes software that builds up a statistical sense of the patterns of language by processing massive amounts of text, usually scraped from the web. LaMDA, for example, was initially trained on more than a trillion words drawn from online forums, Q&A sites, Wikipedia and other web pages. That vast trove of data helps the algorithm perform tasks such as generating text in different styles, interpreting new text, or functioning as a chatbot. And if these systems work as hoped, they will be nothing like today’s frustrating chatbots: Google Assistant and Amazon’s Alexa can perform only certain pre-programmed tasks, and they falter when presented with something they don’t understand.
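
    To make “a statistical sense of the patterns of language” concrete, here is a deliberately tiny Python sketch. It is not how LaMDA works (LaMDA is a Transformer neural network trained on more than a trillion words), but a simple bigram counter shows the core idea in miniature: tally which words follow which in example text, then generate new text by sampling from those tallies. The corpus and all names below are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only, not Google's method: the principle is simply to
# learn from example text which words tend to follow which, then sample
# new text from those learned patterns.

corpus = ("the model reads text and the model learns "
          "which words follow which words").split()

# Record every word that follows each word in the corpus. Repeats in the
# list make frequent successors proportionally more likely to be sampled.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break  # dead end: this word never appeared mid-corpus
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the model learns which words follow which words"
```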

    Chat logs released by Google show that LaMDA can be — at least sometimes — informative, thought-provoking, or even funny. Testing the chatbot prompted Google vice president and AI researcher Blaise Agüera y Arcas to write a personal essay last December arguing that the technology could provide new insights into the nature of language and intelligence. “It can be very difficult to shake the idea that there is a ‘who,’ not an ‘it,’ on the other side of the screen,” he wrote.

    Pichai made clear when he announced the first version of LaMDA last year, and again on Wednesday, that he sees it as a potential path to voice interfaces vastly broader than the often frustratingly limited capabilities of services like Alexa, Google Assistant and Apple’s Siri. Now Google’s leaders appear convinced that they may have finally found a way to make computers you can genuinely talk with.

    At the same time, large language models have proven fluent in language that is dirty, mean, and downright racist. Scraping billions of words of text from the web inevitably sweeps up a lot of unsavory content. OpenAI, the company behind the language generator GPT-3, has reported that its creation can perpetuate stereotypes about gender and race, and it asks customers to implement filters to screen out unsavory content.
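
    OpenAI has not published the filters it asks customers to use, so the following Python sketch is purely hypothetical: a bare-bones blocklist check on a model’s output before it reaches a user. Real moderation systems rely on trained classifiers rather than keyword lists, which miss context and are easy to evade; every name and term below is invented for illustration.

```python
# Hypothetical sketch of the simplest possible output filter. Not
# OpenAI's tooling: production systems use trained classifiers, not
# keyword lists. BLOCKLIST and the function names are invented.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real list

def is_safe(generated_text: str) -> bool:
    """Return False if the model's output contains a blocklisted term."""
    words = {w.strip(".,!?\"'").lower() for w in generated_text.split()}
    return BLOCKLIST.isdisjoint(words)

def respond(generated_text: str) -> str:
    """Pass safe output through; withhold anything the filter catches."""
    return generated_text if is_safe(generated_text) else "[response withheld]"

print(respond("a perfectly polite reply"))     # passes through unchanged
print(respond("a reply containing badword1"))  # prints "[response withheld]"
```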