
AI is mastering language. Should we trust what it says?

    But even as the fluency of GPT-3 has dazzled many observers, the large-language-model approach has also drawn considerable criticism over the last few years. Some skeptics argue that the software is capable only of blind mimicry – that it imitates the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, channeling research dollars and attention into what will ultimately prove to be a dead end, keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever be compromised by the bias, propaganda and misinformation in the data it was trained on, meaning it will always be irresponsible to use it for anything more than parlor tricks.

    Wherever you come down in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won’t be deployed commercially in the coming years. And that raises the question of how they – and, for that matter, the other breakneck developments of AI – should be unleashed on the world. In the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such magnitude and ambition, with such promise and such potential for abuse?

    Or should we build it at all?

    The origins of OpenAI date to July 2015, when a small group of tech stars gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computing power — and some new breakthroughs in neural network design — had created a palpable sense of excitement in the field of machine learning; there was a sense that the long “AI winter,” the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural network had achieved before. Google quickly swooped in to hire the creators of AlexNet, while also acquiring DeepMind and starting an initiative of its own called Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa showed that even scripted agents could be breakout hits with consumers.

    But during that same period, there was a seismic shift in public attitudes toward Big Tech, with once-popular companies like Google and Facebook criticized for their near-monopoly power, their amplification of conspiracy theories and their inexorable pull of our attention toward algorithmic feeds. Long-standing fears about the dangers of artificial intelligence were appearing in op-ed pages and on the TED stage. Nick Bostrom of the University of Oxford published his book “Superintelligence,” introducing a range of scenarios in which advanced AI could deviate from humanity’s interests, with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” Only this time, perhaps, the algorithms wouldn’t just sow polarization or sell our attention to the highest bidder — they might destroy humanity itself. And once again, all the evidence suggested this power would be controlled by a few Silicon Valley mega-corporations.

    The agenda for the dinner on Sand Hill Road that night in July was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome, taking into account both the short-term negative consequences of the Web 2.0 era and the existential threats in the long run. From that dinner, a new idea began to take shape—one that would soon become a full-time obsession for Y Combinator’s Sam Altman and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as it was organizational: unleashing AI on the world in a safe and beneficial way would require innovation on the level of governance and incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling predictions of Bostrom and Hawking convinced them that the achievement of humanlike intelligence by AIs would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

    In December 2015, the group announced the creation of a new entity called OpenAI. Altman had signed on as chief executive, with Brockman overseeing the technology; another dinner attendee, the AlexNet co-creator Ilya Sutskever, had been recruited from Google to be head of research. (Elon Musk, who also attended the dinner, joined the board of directors but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: “OpenAI is a non-profit artificial-intelligence research company,” they wrote. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They added: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

    The founders of OpenAI would release a public charter three years later, outlining the core principles behind the new organization. The document was easily read as a not-so-subtle dig at Google’s “Don’t be evil” slogan from its early days, an acknowledgment that maximizing the social benefits — and minimizing the harm — of new technology was not always that simple a calculation. While Google and Facebook had achieved world domination through closed-source algorithms and proprietary networks, OpenAI’s founders vowed to go the other way, freely sharing new research and code with the world.