
Meta reveals a more powerful AI and doesn’t care who uses it

    The tech industry’s biggest companies have spent the past year warning that the development of artificial intelligence technology is outpacing even their expectations and that they should limit who can access it.

    Mark Zuckerberg is doubling down on a different tack: He’s giving it away.

    Mr. Zuckerberg, Meta’s CEO, said on Tuesday that he intended to provide the code behind the company’s latest and most advanced AI technology for free to developers and software enthusiasts around the world.

    The decision, similar to one Meta made in February, could help the company catch up with competitors like Google and Microsoft. Those companies have moved more quickly to incorporate generative artificial intelligence — the technology behind OpenAI’s popular ChatGPT chatbot — into their products.

    “When software is open, more people can investigate it to identify and fix potential problems,” Mr. Zuckerberg said in a post on his personal Facebook page.

    The latest version of Meta’s AI was built with 40 percent more data than the version the company released a few months ago and is believed to be significantly more powerful. Meta is also providing a detailed road map showing developers how to work with the vast amount of data it has collected.

    Researchers worry that generative AI could fuel a surge of disinformation and spam on the internet, posing dangers that even some of its creators don’t fully understand.

    Meta’s approach reflects a long-held belief that letting all kinds of programmers tinker with a technology is the best way to improve it. Until recently, most AI researchers agreed. But in the past year, companies like Google, Microsoft and OpenAI, a San Francisco start-up, have set limits on who can access their latest technology and placed controls on what can be done with it.

    The companies say they are restricting access for safety reasons, but critics say they are also trying to stifle competition. Meta says it is in everyone’s best interest to share what it is working on.

    “Meta has historically been a big believer in open platforms, and it has worked really well for us as a company,” Ahmad Al-Dahle, vice president of generative AI at Meta, said in an interview.

    The move will make the software “open source,” which is computer code that can be freely copied, modified, and reused. The technology, called LLaMA 2, provides everything anyone needs to build online chatbots like ChatGPT. LLaMA 2 is released under a commercial license, meaning developers can build their own businesses using Meta’s underlying AI to power them – all for free.

    By open sourcing LLaMA 2, Meta can take advantage of improvements made by outside programmers, while – Meta executives hope – encouraging AI experimentation.

    Meta’s open-source approach isn’t new. Companies often create open source technologies in an effort to catch up with rivals. Fifteen years ago, Google open sourced its Android mobile operating system to better compete with Apple’s iPhone. While the iPhone had an early lead, Android eventually became the dominant software used in smartphones.

    But researchers argue that someone could deploy Meta’s AI without the safeguards that tech giants like Google and Microsoft often use to suppress toxic content. Open source models released without those safeguards could, for example, be used to flood the internet with even more spam, financial scams and disinformation.

    LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or LLM. Chatbots like ChatGPT and Google Bard are built with large language models.

    The models are systems that learn skills by analyzing massive amounts of digital text, including Wikipedia articles, books, online forum conversations and chat logs. By locating patterns in that text, the systems learn to generate text of their own, including essays, poetry and computer code. They can even carry on a conversation.
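    To make that idea concrete, the toy sketch below counts which word tends to follow which in a scrap of text and then generates new text by sampling from those counts. It is only an illustration of the find-patterns-then-generate loop described above, not Meta’s method; a real large language model is a neural network with billions of learned parameters.

```python
# A toy "language model": count word-to-word transitions in a text, then
# generate new text by repeatedly sampling a likely next word.
# Illustration only; real LLMs like LLaMA 2 work very differently and at scale.
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, how often every other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

corpus = "open models help developers build chatbots and open models spread fast"
model = train_bigrams(corpus)
print(generate(model, "open"))
```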

    Meta is working with Microsoft to open-source LLaMA 2, which will run on Microsoft’s Azure cloud services. LLaMA 2 will also be available through other providers, including Amazon Web Services and the start-up Hugging Face.
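    For a rough sense of what that availability looks like in practice, the sketch below uses the open source transformers library to download and prompt a LLaMA 2 chat model from Hugging Face. The model name, the access approval it requires and the hardware it needs are assumptions for illustration, not details from Meta’s announcement.

```python
# A minimal sketch, assuming the `transformers` library is installed and the
# user has been granted access to a gated LLaMA 2 checkpoint on Hugging Face.
# The model id below is illustrative; a 7-billion-parameter model also needs
# a machine with substantial memory to run comfortably.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed/illustrative model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the model and print its continuation.
prompt = "Why does open source software matter? Answer in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```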

    Dozens of Silicon Valley technologists signed a letter of support for the initiative, including venture capitalist Reid Hoffman and executives from Nvidia, Palo Alto Networks, Zoom and Dropbox.

    Meta isn’t the only company committed to open-source AI projects. The Technology Innovation Institute produced Falcon LLM and freely published its code this year. MosaicML also offers open-source software for training LLMs.

    Meta executives argue that their strategy is not as risky as many believe. They say that people can already generate large amounts of disinformation and hate speech without AI, and that such toxic material is already tightly restricted on Meta’s social networks like Facebook. They argue that releasing the technology could ultimately strengthen the ability of Meta and other companies to fight back against misuse of the software.

    Meta did additional “Red Team” testing of LLaMA 2 before its release, Mr. Al-Dahle said. The term refers to probing software for potential misuse and figuring out how to protect against it. The company will also release a responsible-use guide with best practices and guidance for developers who want to build programs using its code.

    But those tests and guidelines apply to only one of the models Meta is releasing, a version trained and fine-tuned with guardrails meant to inhibit abuse. Developers will also be able to use the code to create chatbots and programs without those guardrails, a move skeptics see as a risk.

    In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed academics to download the model after it had been trained on vast amounts of digital text, a process scientists call “releasing the weights,” referring to the numerical values the system learns as it analyzes data.

    It was a remarkable step because analyzing all that digital data requires enormous computing power and money. With the weights in hand, anyone can build a chatbot far more cheaply and easily than by training one from scratch.
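    “Weights” can sound abstract, but in practice they are just very large arrays of numbers stored in files. The hypothetical sketch below, written with the open source PyTorch library, shows why handing out those files matters: anyone can load the finished parameters directly instead of repeating the enormously expensive training run that produced them.

```python
# A toy illustration of "releasing the weights." File name and tensor shapes
# are made up, and real LLaMA 2 weights hold billions of values, not a handful.
import torch

# Pretend these tensors are the parameters a model learned during training.
weights = {
    "embedding.weight": torch.randn(1000, 64),
    "output.weight": torch.randn(64, 1000),
}
torch.save(weights, "tiny_model_weights.pt")

# Anyone who receives the file can load the learned parameters directly,
# skipping the costly training step entirely.
restored = torch.load("tiny_model_weights.pt")
print({name: tuple(t.shape) for name, t in restored.items()})
```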

    Many in the tech industry believed that Meta set a dangerous precedent, and after Meta shared its AI technology with a small group of academics in February, one of the researchers leaked the technology to the public internet.

    In a recent op-ed in The Financial Times, Nick Clegg, Meta’s president of global public policy, argued that it was “unsustainable to keep fundamental technology in the hands of just a few big companies,” and that releasing open source software has historically also served companies strategically.

    “I look forward to seeing what you all build!” said Mr. Zuckerberg in his post.