Gary Marcus used to call AI stupid – now he calls it dangerous

    Back then – just months ago – Marcus’s critiques were technical. But as large language models have become a global phenomenon, his focus has shifted. The gist of Marcus’s new message is that the chatbots from OpenAI, Google, and others are dangerous entities whose power will lead to a tsunami of misinformation, security bugs, and “hallucinations” that automate slander. This seems contradictory: for years, Marcus had argued that the AI builders’ claims are exaggerated. Why is AI now so formidable that society must restrict it?

    Always chatty, Marcus has an answer: “Yes, I’ve been saying for years that [LLMs] are actually pretty dumb, and I still believe that. But there’s a difference between power and intelligence. And we’re suddenly giving them a lot of power.” In February, he realized the situation was alarming enough to devote most of his energy to the problem. Ultimately, he says, he’d like to lead a non-profit organization dedicated to getting the best out of AI and avoiding the worst.

    Marcus argues that to head off potential damage and destruction, policymakers, governments, and regulators must slow the development of AI. Along with Elon Musk and dozens of other scientists, policy wonks, and just plain alarmed observers, he signed the now-famous petition demanding a six-month pause in training new LLMs. But he admits he doesn’t really think such a pause would make a difference, and that he signed mainly to align himself with the community of AI critics. Instead of a training time-out, he prefers a pause in deploying new models or iterating on current ones – something that would presumably have to be forced on companies, given the fierce, almost existential competition between Microsoft and Google, with Apple, Meta, Amazon, and countless startups looking to get into the game.

    Marcus has an idea who might do the enforcing. He has recently been urging that the world immediately needs “a global, neutral, non-profit international agency for AI,” which would go by an acronym (Iaai!) that sounds like a scream.

    As he outlined in an op-ed he co-authored in The Economist, such a body might operate like the International Atomic Energy Agency, which conducts audits and inspections to identify nascent nuclear programs. Presumably this agency would audit algorithms to make sure they don’t encode bias, promote misinformation, or take over power grids while we’re not looking. While it may seem a stretch to imagine the United States, Europe, and China all working together on this, perhaps the threat of an alien, if homegrown, intelligence overthrowing our species might lead them to act in the interest of Team Human. Hey, it worked with that other global threat, climate change! Uh…

    In any case, the discussion about controlling AI will gain momentum as the technology becomes ever more deeply embedded in our lives. So expect to see a lot more of Marcus, and plenty of other talking heads. And that’s okay. Discussion about what to do with AI is healthy and necessary, even if the fast-moving technology may well develop regardless of whatever measures we adopt, scrupulously and too late. The meteoric rise of ChatGPT into a business tool, entertainment device, and all-purpose confidant indicates that, scary or not, we want this stuff. Like every other huge technological advance, superintelligence seems destined to bring us irresistible benefits, even as it changes the workplace, our cultural consumption, and, inevitably, us.