
The boring future of generative AI

    At its annual I/O developer conference in Mountain View this week, Google showcased a dizzying array of projects and products powered or enhanced by AI. They include a new and improved version of its chatbot Bard, tools that let you write emails and documents or manipulate images, devices with built-in AI, and a chatbot-like experimental version of Google Search. For a full recap of the event, complete with insightful and witty commentary from my WIRED colleagues, visit our Google I/O live blog.

    Of course, Google’s big pivot is fueled not so much by algorithms as by generative AI FOMO.

    The appearance last November of ChatGPT – OpenAI’s remarkably smart but still rather flawed chatbot – coupled with Microsoft adding the technology to its Bing search engine a few months later, sent Google into a panic. ChatGPT proved extremely popular with users, demonstrating new ways of serving information that threatened Google’s vice grip on the search business and its reputation as a leader in AI.

    The capabilities of ChatGPT and the AI language models that power it are so striking that some experts, including Geoffrey Hinton, a pioneering researcher who recently left Google, have felt compelled to warn that we may be building systems we will one day struggle to control. OpenAI’s chatbot is often amazingly good at generating coherent text on a given topic, summarizing information from the web, and even answering extremely tricky questions that require expert knowledge.

    And yet unfettered AI language models are also silver-tongued agents of chaos. They will happily make up facts, express unpleasant prejudices, and say unpleasant or disturbing things with the right prodding. Shortly after launch, Microsoft was forced to restrict Bing Chat’s capabilities to prevent such embarrassing misconduct, in part because the bot revealed its secret code name – Sydney – and told a New York Times columnist that he did not love his spouse.

    Google has been working hard to tone down the chaotic streak of text-generating technology as it prepares its experimental search feature, announced yesterday, that responds to queries with chat-style responses that synthesize information from around the web.

    Google’s smarter version of search is impressively restrained, refusing to use the first person or talk about its thoughts or feelings. It completely avoids topics that could be considered risky, declining to give medical advice or weigh in on potentially controversial subjects such as US politics.

    Google deserves credit for curbing the wild side of generative chatbots. But in my tests, the new search interface felt incredibly tame compared to ChatGPT or Google’s own chatbot Bard.

    As the company moves the technology into more of its products, the generative AI revolution might turn out to be a lot less fun than the early shock and awe of ChatGPT, a chatbot with an edgy charm, would suggest. Gone are the wild ravings and fantasies of powerful AI bots. Instead, there are new ways to fill out spreadsheets, compose email courtesies, and find products to buy.

    Even if the warnings from “AI doomers” about wayward AI prove to be overblown, it will be interesting to see how companies like Google and OpenAI balance the development of more powerful generative language models with the need to make them behave.

    Google has poured huge sums and vast resources into AI in recent years, with CEO Sundar Pichai often touting the company as “AI first,” and it is desperate to show that it can advance the technology faster than OpenAI. A key takeaway from Google’s stream of AI announcements was that the company will no longer hold back the way it did with LaMDA, a chatbot announced long before ChatGPT appeared but never released to the public.

    In March, some big names in AI research signed an open letter calling for a six-month pause in making machine learning systems more powerful than GPT-4, which powers ChatGPT. Pichai was not a signatory and said in his keynote speech yesterday that the company is currently training a new, more powerful language model called Gemini.

    A source at Google tells me that this new system will incorporate a range of recent developments from several major language models and may eclipse GPT-4. But don’t expect to experience the full power or charisma of Gemini. If Google applies the same chaos-taming methods as it did in its chat-like search experiment, it might just look like another surprisingly clever autocomplete.