
How AI can make gaming better for all players

    When Google revealed Project Gameface, the company was proud to show off a hands-free, AI-powered gaming mouse that, according to the announcement, “allows people to control a computer’s cursor using their head movements and facial gestures.” While this may not be the first AI-based gaming tool, it was certainly one of the first to put AI in the hands of players, rather than developers.

    The project was inspired by Lance Carr, a quadriplegic video game streamer who uses a head-tracking mouse as part of his gaming setup. After his existing hardware was lost in a fire, Google stepped in to create an open source, highly configurable, low-cost alternative to expensive replacement hardware, powered by machine learning. While AI as a whole remains divisive, we wanted to find out whether AI, when used for good, could be the future of gaming accessibility.

    To understand how Gameface works, it helps to define AI and machine learning, two terms that are related but not interchangeable.

    “AI is a concept,” Laurence Moroney, AI advocacy lead at Google and one of the brains behind Gameface, tells WIRED. “Machine learning is a technique that you use to implement that concept.”

    Thus, machine learning fits under the umbrella of AI, alongside implementations such as large language models. But where well-known applications such as OpenAI’s ChatGPT and Stability AI’s Stable Diffusion are generative, machine learning is characterized by learning and adapting without explicit instruction, drawing conclusions from recognizable patterns.

    Moroney explains how this is applied to Gameface in a range of machine learning models. “The first was being able to detect where a face is in an image,” he says. “The second was, once you had a picture of a face, to be able to understand where obvious points (eyes, nose, ears, etc.) are.”

    After that, another model can track and decipher the movements of those points, and translate them into mouse input.
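    The pipeline Moroney describes (detect a face, locate its landmark points, then turn their frame-to-frame movement into cursor input) can be sketched roughly as follows. This is an illustrative assumption of how such a mapping might work, not Gameface's actual code: the landmark names, gain values, and gesture threshold here are all hypothetical.

    ```python
    # Hypothetical sketch of a Gameface-style control loop: map the movement
    # of a tracked facial point between frames to a cursor displacement, and
    # use a facial gesture (an open mouth) as a click trigger. All names and
    # numbers are illustrative assumptions, not Google's implementation.
    from dataclasses import dataclass


    @dataclass
    class Landmark:
        x: float  # normalized [0, 1] image coordinates
        y: float


    def cursor_delta(prev: Landmark, curr: Landmark,
                     screen_w: int = 1920, screen_h: int = 1080,
                     gain: float = 2.0) -> tuple:
        """Convert the movement of one tracked point (e.g. the nose tip)
        between two frames into a cursor displacement in pixels."""
        dx = (curr.x - prev.x) * screen_w * gain
        dy = (curr.y - prev.y) * screen_h * gain
        return round(dx), round(dy)


    def mouth_open(upper_lip: Landmark, lower_lip: Landmark,
                   threshold: float = 0.05) -> bool:
        """Treat a sufficiently large lip separation as a 'click' gesture."""
        return (lower_lip.y - upper_lip.y) > threshold
    ```

    In a real system, the landmark coordinates would come from the face-detection and landmark models Moroney describes, running on each camera frame; the `gain` value plays the role of a user-configurable sensitivity setting.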

    It is an explicitly supportive implementation of AI, as opposed to the one often touted as making human input obsolete. Indeed, this is how Moroney suggests AI is best applied, to “amplify our ability to do things that weren’t feasible before.”

    This sentiment goes beyond Gameface’s potential to make gaming more accessible. AI, Moroney suggests, can have a major impact on accessibility for players, as well as how developers create accessibility solutions.

    “Anything that allows developers to become orders of magnitude more effective at solving types of problems that were previously unfeasible,” he says, “can only benefit accessibility or any other space.”

    This is something that developers are already beginning to understand. Artem Koblov, creative director of Perelesoq, tells WIRED that he “wants to see more resources focused on solving routine tasks, rather than creative inventions.”

    This allows AI to help with time-consuming technical processes. With the right applications, AI could create a leaner, more permissive development cycle where it both helps with the mechanical implementation of accessibility solutions and gives developers more time to consider them.

    “As a developer, you want to have as many tools as possible that can help make your job easier,” says Conor Bradley, creative director of Soft Leaf Studios. He points to improvements in current accessibility AI implementations, including “real-time text-to-speech and speech-to-text generation, and speech and image recognition.” And he sees opportunities for future developments. “Over time, I see more and more games leveraging these powerful AI tools to make our games more accessible.”