
Google's CEO says more than 25% of new Google code is generated by AI

    On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of the new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, highlights how AI tools are already having a significant impact on software development.

    “We also use AI internally to improve our coding processes, which increases productivity and efficiency,” Pichai said during the call. “Today, more than a quarter of all new code at Google is generated by AI and then reviewed and accepted by engineers. This helps our engineers do more and move faster.”

    Google developers aren't the only programmers using AI to assist with coding tasks. Precise industry-wide numbers are hard to come by, but according to Stack Overflow's 2024 Developer Survey, more than 76 percent of all respondents are “using or planning to use AI tools in their development process this year,” while 62 percent are actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers “are already using AI coding tools, both inside and outside of work.”

    AI-assisted coding first gained wide attention with GitHub Copilot, which debuted as a limited preview in 2021 and became generally available in June 2022. It relied on a specialized coding AI model from OpenAI called Codex, trained both to suggest continuations of existing code and to create new code from scratch based on English instructions. Since then, AI-based coding has expanded dramatically, with increasingly capable offerings from Anthropic, Meta, Google, OpenAI, and Replit.
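    To make those two modes concrete, the sketch below shows the kind of interaction Codex-style tools support. Both functions are hypothetical illustrations written for this article in Python, not actual model output, and the prompts are invented examples:

    # Mode 1: continuation. The developer writes a signature and docstring,
    # and the assistant proposes the body (everything after the docstring
    # here stands in for a suggested completion).
    def median(values: list[float]) -> float:
        """Return the median of a non-empty list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2 == 1:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    # Mode 2: generation from English instructions. The developer writes
    # only a natural-language prompt as a comment, and the assistant drafts
    # the whole function from scratch.
    # Prompt: "check whether a string reads the same forward and backward,
    # ignoring case"
    def is_palindrome(text: str) -> bool:
        normalized = text.lower()
        return normalized == normalized[::-1]

    In either mode, the engineer reviews each suggestion and accepts or rejects it, which is the workflow Pichai described on the earnings call.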

    GitHub Copilot has also expanded in terms of capabilities. Yesterday, the Microsoft subsidiary announced that for the first time, developers can use non-OpenAI models, such as Anthropic's Claude 3.5 Sonnet and Google's Gemini 1.5 Pro, to generate code within the application.

    While some tout the benefits of using AI in coding, the practice has also drawn criticism from those who worry that future software generated in part or mostly by AI could become riddled with hard-to-detect bugs and errors.

    According to a 2023 study from Stanford University, developers who used AI coding assistants tended to introduce more bugs while paradoxically believing their code was more secure. This finding was highlighted by Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, who told Wired that “there are likely both benefits and risks” with AI-assisted coding, emphasizing that “more code does not mean better code.”