Google DeepMind's chief scientist Jeff Dean says the model benefits from additional computing power, writing on X: “we see promising results as we increase inference time computation!” The model works by pausing to consider multiple related questions before providing what it determines to be the most accurate answer.
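To give a rough sense of why extra inference-time computation can help, here is a minimal, self-contained sketch of one generic approach: sampling several candidate answers and keeping the most common one. This is an illustration of the broad idea only, not Google's implementation; the `noisy_solver` and `answer_with_voting` functions are hypothetical stand-ins for a model that is sometimes wrong.

```python
import random
from collections import Counter

def noisy_solver(question: str) -> str:
    """Stand-in for one model sample: returns the right answer 60% of the time."""
    correct = "42"
    return correct if random.random() < 0.6 else str(random.randint(0, 41))

def answer_with_voting(question: str, n: int) -> str:
    """Spend more inference-time compute: sample n answers, return the most common."""
    votes = Counter(noisy_solver(question) for _ in range(n))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    question = "What is 6 * 7?"
    for n in (1, 5, 25):
        # Reliability of the final answer tends to rise as n (compute) grows.
        hits = sum(answer_with_voting(question, n) == "42" for _ in range(200))
        print(f"n={n:>2}: {hits / 200:.0%} correct")
```

Under these toy assumptions, accuracy climbs as more samples are drawn per question, which is the basic intuition behind scaling up computation at inference time rather than only at training time.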
Since OpenAI entered the “reasoning” field in September with o1-preview and o1-mini, several companies have rushed to achieve feature parity with their own models. For example, DeepSeek launched DeepSeek-R1 in early November, while Alibaba's Qwen team released its own “reasoning model,” QwQ, in late November.
Although some argue that reasoning models can help solve complex mathematical or academic problems, these models may not be for everyone. While they perform well on some benchmarks, questions remain about their actual usability and accuracy. Furthermore, the high computational costs required to run reasoning models have raised doubts about their long-term viability. Those high costs are why OpenAI's ChatGPT Pro, for example, costs $200 per month.
Still, it seems like Google is serious about pursuing this particular AI technique. Logan Kilpatrick, who leads product for Google's AI Studio, called it “the first step in our reasoning journey” in a post on X.