On Monday, Ars Technica hosted our virtual Ars Frontiers conference. In our fifth panel, we covered “The Lightning Onset of AI – What’s Suddenly Changed?” The panel featured a conversation with Paige Bailey, Lead Product Manager for Generative Models at Google DeepMind, and Haiyan Zhang, General Manager of Gaming AI at Xbox, moderated by Ars Technica’s AI reporter, Benj Edwards.
The panel was originally streamed live, and you can now watch a recording of the entire event on YouTube. The introduction of the “Lightning AI” part begins at 2:26:05 in the broadcast.
Since “AI” is a vague term, meaning different things in different contexts, we started the discussion by considering the definition of AI and what it means to the panelists. Bailey said: “I like to think of AI as helping to derive patterns from data and use them to predict insights…”
Zhang agreed, but from a video game point of view, she also sees AI as an evolving creative force. For her, AI isn’t just about analyzing, pattern-finding, and classifying data; it is also developing capabilities in creative language, image generation, and coding. Zhang believes this transformative power of AI can elevate and inspire human inventiveness, especially in video games, which she considers “the ultimate expression of human creativity”.
We then dived into the panel’s main question: What has changed that has led to this new era of AI? Is it all hype, perhaps based on ChatGPT’s high visibility, or have there been some major technical breakthroughs that have brought us this new wave?
Zhang pointed to developments in AI techniques and the huge amounts of data now available for training: “We have seen breakthroughs in model architecture for transformer models, as well as the recursive autoencoder models, and also the availability of large data sets to then train that, and couple that with, third, the availability of hardware like GPUs, MPUs to be able to actually take the models, to ingest the data, and to be able to train them in new computing capabilities.”
Bailey echoed these sentiments, adding a noteworthy mention of open-source contributions: “We also have this vibrant community of open-source tinkerers who are open-sourcing models, models like LLaMA, refining them with very high-quality instruction tuning and RLHF datasets.”
When asked to elaborate on the importance of open source collaborations in accelerating AI advancements, Bailey mentioned the widespread use of open source machine learning frameworks such as PyTorch, JAX, and TensorFlow. She also reaffirmed the importance of sharing best practices, stating, “I definitely think this machine learning community only exists because people share their ideas, their insights, and their code.”
When asked about Google’s plans for open source models, Bailey pointed to existing Google Research resources on GitHub and highlighted their collaboration with Hugging Face, an online AI community. “I don’t want to give away anything that could come through the pipeline,” she said.
Generative AI on game consoles, AI risks
As part of a conversation about advancements in AI hardware, we asked Zhang how long it would take for generative AI models to run locally on consoles. She said she was excited about the prospect, noting that a dual cloud-client setup would come first: “I think it will be a combination of working on the AI to do inference in the cloud and working in partnership with local inference for us to bring the best player experiences to life.”
Bailey pointed to progress in shrinking Meta’s LLaMA language model to run on mobile devices, hinting that a similar path forward could open up the possibility of using AI models on game consoles as well: “I’d love to see a hyper-personalized large language model running on a mobile device, or running on my own gaming console, that might make a boss particularly hard for me to beat, but that might be easier for someone else to beat.”
Moving on, we asked whether a generative AI model running locally on a smartphone would take Google out of the equation. “I really think there’s probably room for a variety of options,” Bailey said. “I think there should be options available for all of these things to coexist meaningfully.”
When discussing the social risks of AI systems, such as misinformation and deepfakes, both panelists said their respective companies are committed to responsible and ethical AI use. “At Google, we care deeply about making sure the models we produce are responsible and behave as ethically as possible. And we involve our responsible AI team from day zero, when we train models, from collecting our data, to make sure the right mix is made before training,” explained Bailey.
Despite her previous enthusiasm for open source and locally run AI models, Bailey said that API-based AI models that only run in the cloud can generally be more secure: “I think there is a significant risk of models being abused in the hands of people who may not necessarily understand or be aware of the risk, which is also part of the reason why it sometimes helps to choose APIs over open source models.”
Like Bailey, Zhang discussed Microsoft’s corporate approach to responsible AI, but she also addressed gaming-specific ethical challenges, such as ensuring that AI features are inclusive and accessible.