Meta releases Llama 3.2 and gives its AI a voice

    Mark Zuckerberg announced today that Meta, his social-media-turned-metaverse-turned-artificial-intelligence conglomerate, will upgrade its AI assistants to give them an array of celebrity voices, including those of Dame Judi Dench and John Cena. The most important upgrade for Meta’s long-term ambitions, however, is the new ability for its models to see users’ photos and other visual information.

    Meta also announced Llama 3.2 today, the first version of its free AI models with visual capabilities, expanding their usefulness and relevance for robotics, virtual reality, and so-called AI agents. Some versions of Llama 3.2 are also the first to be optimized to run on mobile devices. This could help developers create AI-powered apps that run on a smartphone and use the camera or look at the screen to interact with apps on your behalf.

    “This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” Zuckerberg said onstage at Connect, a Meta event held today in California.

    Given Meta’s massive reach across Facebook, Instagram, WhatsApp and Messenger, the assistant upgrade could give many people their first taste of a new generation of more vocal and visually capable AI helpers. Meta said today that more than 180 million people already use Meta AI, as the company’s AI assistant is called, each week.

    Meta has recently given its AI a more prominent role in its apps, such as by making it part of the search bar in Instagram and Messenger. The new celebrity options available to users also include Awkwafina, Keegan-Michael Key and Kristen Bell.

    Meta previously gave text-based assistants celebrity personas, but the characters didn’t gain much traction. In July, the company launched a tool called AI Studio that lets users create chatbots with any persona they choose. Meta says the new voices will be available to users in the US, Canada, Australia and New Zealand over the next month. The Meta AI image capabilities are rolling out in the US, but the company didn’t say when the features might appear in other markets.

    The new version of Meta AI can also provide feedback and information about users’ photos; if you’re not sure which bird you’ve taken a photo of, for example, it can tell you the species. And it can help with image editing, for example by adding new backgrounds or details on request. Google released a similar tool for its Pixel phones and for Google Photos in April.

    Meta AI’s new capabilities are powered by an improved version of Llama, Meta’s core large language model. The free model announced today could also have broad impact, given how widely the Llama family has already been adopted by developers and startups.

    Unlike OpenAI’s models, Llama can be downloaded and run locally for free, though there are some restrictions on large-scale commercial use. Llama can also be more easily refined or customized with additional training for specific tasks.