Amazon is ready to use its own AI chips and reduce its dependence on Nvidia

    Amazon now expects about $75 billion in capital expenditures in 2024, most of it on technology infrastructure. On the company's last earnings call, CEO Andy Jassy said he expects the company to spend even more in 2025.

    That is up from 2023, when the company spent $48.4 billion for the full year. The largest cloud providers, including Microsoft and Google, are all on an AI spending spree that shows little sign of abating.

    Amazon, Microsoft and Meta are all major customers of Nvidia, but are also designing their own data center chips to lay the foundation for what they hope will be a wave of AI growth.

    “Each of the major cloud providers is feverishly moving toward a more verticalized and, if possible, homogenized and integrated [chip technology] stack,” said Daniel Newman of The Futurum Group.

    “Everyone from OpenAI to Apple wants to build their own chips,” Newman noted, as they seek “lower production costs, higher margins, greater availability and more control.”

    “It's not [just] about the chip, it's about the entire system,” said Rami Sinno, Annapurna's chief technical officer and a veteran of SoftBank's Arm and Intel.

    For Amazon's AI infrastructure, that means building everything from the ground up, from the silicon wafer to the server racks it slots into, all supported by Amazon's proprietary software and architecture. “It's really difficult to do what we do at scale. Not many companies can,” said Sinno.

    Annapurna started by building a security chip for AWS called Nitro, and has since developed several generations of Graviton, its Arm-based central processing units that provide a low-power alternative to the traditional server workhorses from Intel or AMD.