
OpenAI Seeks Nvidia Chip Alternatives as $100 Billion Investment Deal Stalls

2026/02/03 17:37
3 min read

TLDR

  • OpenAI is dissatisfied with some Nvidia chips for inference tasks and has been seeking alternatives since last year for about 10% of its inference computing needs
  • The $100 billion Nvidia investment deal in OpenAI, expected to close within weeks, has been delayed for months as negotiations continue
  • OpenAI has struck deals with AMD, Broadcom, and Cerebras Systems for alternative chips, particularly those with more embedded memory for faster inference
  • Nvidia responded by licensing Groq’s technology for $20 billion and hiring away Groq’s chip designers to strengthen its inference capabilities
  • Both CEOs publicly downplayed tensions, with Sam Altman calling Nvidia’s chips “the best in the world” and Jensen Huang dismissing reports as “nonsense”

OpenAI has been looking for alternatives to some of Nvidia’s chips since last year. The ChatGPT maker needs different hardware for inference, the stage at which trained AI models respond to user queries.

The company wants chips that can provide faster responses for specific workloads, including software development and AI systems that communicate with other software.

OpenAI is seeking alternatives for about 10% of its future inference computing needs. Seven sources familiar with the matter confirmed the company’s dissatisfaction with Nvidia’s current hardware speed for certain tasks.

Delayed Investment Deal

Nvidia announced plans in September to invest up to $100 billion in OpenAI. The deal was supposed to close within weeks but has dragged on for months.


OpenAI’s changing product roadmap has altered its computational requirements. This shift has complicated the ongoing negotiations with Nvidia.

During this period, OpenAI signed deals with AMD, Broadcom, and Cerebras Systems. These companies provide chips designed to compete with Nvidia’s offerings.

The issue became particularly visible in OpenAI’s Codex product for creating computer code. Staff members attributed some of Codex’s performance issues to Nvidia’s GPU-based hardware.

On January 30, Sam Altman told reporters that coding model customers value speed. He said OpenAI would meet this demand partly through its recent deal with Cerebras.

Technical Requirements

OpenAI has focused on companies building chips with large amounts of SRAM memory. SRAM is embedded in the same piece of silicon as the rest of the chip.

This design offers speed advantages for chatbots and AI systems serving millions of users. Inference is more memory-bound than training, because chips spend much of their time fetching model weights from memory rather than computing.

Nvidia and AMD GPUs rely on external high-bandwidth memory. Moving data on and off the chip adds latency and slows chatbot response speeds.
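The memory-bandwidth argument above can be sketched with back-of-envelope arithmetic: during autoregressive inference, the chip must stream every model weight from memory once per generated token, so memory bandwidth sets a floor on tokens per second. All figures below are illustrative assumptions, not vendor specifications or numbers from the article.

```python
# Rough lower bound on per-token decode latency for a memory-bound LLM.
# Assumption: one full read of all weights per decoded token, and that
# memory bandwidth (not compute) is the bottleneck.

def min_latency_ms(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time to stream all weights from memory once, in milliseconds."""
    return model_bytes / bandwidth_bytes_per_s * 1000.0

MODEL_BYTES = 70e9   # hypothetical 70 GB of weights (e.g. ~70B params at 8-bit)
EXT_MEM_BW = 3.3e12  # ~3.3 TB/s, an external-HBM-class bandwidth assumption
SRAM_BW = 25e12      # hypothetical aggregate on-chip SRAM bandwidth

ext_ms = min_latency_ms(MODEL_BYTES, EXT_MEM_BW)
sram_ms = min_latency_ms(MODEL_BYTES, SRAM_BW)
print(f"External memory floor: {ext_ms:.1f} ms/token (~{1000/ext_ms:.0f} tok/s)")
print(f"On-chip SRAM floor:    {sram_ms:.1f} ms/token (~{1000/sram_ms:.0f} tok/s)")
```

Under these assumed numbers, on-chip SRAM’s higher bandwidth translates directly into a lower latency floor per token, which is the speed advantage the article attributes to chips with large embedded memory.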

Competing products like Anthropic’s Claude and Google’s Gemini use different hardware. They rely more heavily on Google’s tensor processing units, which are designed for inference calculations.

OpenAI discussed working with startups Cerebras and Groq for faster inference chips. However, Nvidia struck a $20 billion licensing deal with Groq that ended OpenAI’s talks with the company.

Nvidia also hired away Groq’s chip designers as part of the agreement. Groq had been in talks with OpenAI and received investor interest at a $14 billion valuation.

Public Statements

Nvidia stated that customers continue to choose its chips for inference because of performance and cost effectiveness. The company said Groq’s intellectual property was highly complementary to its product roadmap.

An OpenAI spokesperson said the company relies on Nvidia to power most of its inference fleet. The spokesperson added that Nvidia delivers the best performance per dollar for inference.

OpenAI infrastructure executive Sachin Katti posted on Monday that the company was “anchoring on Nvidia as the core of our training and inference.” Both companies emphasized their ongoing partnership despite the reported issues.

The post OpenAI Seeks Nvidia Chip Alternatives as $100 Billion Investment Deal Stalls appeared first on CoinCentral.
