OpenAI partners with Broadcom to design its own AI chips and reduce its dependence on Nvidia

As anticipated, OpenAI has announced a major strategic partnership with Broadcom to design and produce its own dedicated AI chips.

This collaboration marks a significant advancement in the company’s strategy: reducing its reliance on Nvidia and securing the computational power needed for its most demanding models, from ChatGPT to Sora, as well as future superintelligent AI projects.

Goal: Integrating Intelligence Directly Into Silicon

In its announcement, OpenAI stated that designing its own processors would allow it to embed “the expertise gained in developing advanced models directly into the hardware,” paving the way for new levels of performance and energy efficiency.

This approach aims to bridge the gap between software and hardware, a trend already embraced by giants like Apple with its M-series chips and Google with its Tensor Processing Units (TPUs).

In essence, OpenAI seeks to build a tailored hardware ecosystem for its AI models—capable of adapting to their needs rather than the other way around.

A Colossal Plan: 10 Gigawatts of AI Accelerators

The partnership with Broadcom covers the deployment of 10 gigawatts of custom accelerators, an energy capacity comparable to roughly ten nuclear reactors. The first equipment is expected to be installed in the second half of 2026, with the full rollout targeted for completion by the end of 2029.

According to Sam Altman, CEO and co-founder of OpenAI, “This agreement is a critical step in building the infrastructure needed to unleash the potential of AI and benefit businesses and users worldwide.”

This partnership adds to two existing agreements signed by OpenAI: a 6-gigawatt contract with AMD and another 10-gigawatt agreement with Nvidia, both aimed at expanding the computing capacity of its global data centers. Until recently, OpenAI had relied almost exclusively on Microsoft Azure’s infrastructure for its AI computing; revising that arrangement has allowed the company to diversify its partners and strengthen its hardware sovereignty.

A Global Movement Against Dependency on Nvidia

OpenAI is not alone in this endeavor. Google, Meta, Amazon, and Microsoft are also developing their own AI chips to secure their supply chains in the face of the global GPU shortage and rising costs.

While Nvidia remains the undisputed leader with its H100 and B200 GPUs, the proliferation of these customized projects fosters the emergence of a new industrial ecosystem where players like Broadcom play a crucial role.

These collaborations enable companies to optimize performance for specific use cases—text generation, video creation, simulation, or training multimodal models—while also reducing costs and energy consumption.

Toward a Sovereign AI Infrastructure

By investing in its own chips, OpenAI aims to construct an integrated AI infrastructure that spans from models to hardware, accelerating the development of its future so-called “superintelligent” systems.

This strategic shift illustrates the new maturity phase of the AI sector, where the challenge lies not only in the size of models but in the complete control of the technological chain.

