OpenAI has reportedly begun using Google’s artificial intelligence chips to power ChatGPT and other AI services, marking a significant shift in its approach to cloud infrastructure and computing resources. The move comes as the company searches for cost-effective alternatives to support its growing computational needs while continuing to deliver high-performance AI models to millions of users worldwide.
Until now, OpenAI has relied largely on Nvidia’s high-end GPUs, the industry standard for training and running large language models. These chips have powered both the training and inference processes behind tools like ChatGPT. Inference is the phase in which an AI model, already trained on vast amounts of data, applies what it has learned to respond to new queries or make decisions in real time. Nvidia’s chips are widely recognized for their performance, but they carry a high price tag and face growing supply-chain pressures.
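To make the training-versus-inference distinction concrete, here is a minimal sketch of inference using the open-source Hugging Face transformers library and the small, publicly available gpt2 model as illustrative stand-ins (OpenAI’s actual production stack is not public): the expensive training phase has already happened, and the deployed model only answers new queries.

```python
# A minimal sketch of inference, assuming the open-source Hugging Face
# "transformers" library and the small public gpt2 model as stand-ins;
# this is not OpenAI's actual serving stack.
from transformers import pipeline

# Training already happened elsewhere: loading the model simply restores
# weights learned from vast amounts of text.
generator = pipeline("text-generation", model="gpt2")

# Inference: the trained model responds to a brand-new prompt in real time.
# Every such call consumes compute, which is why serving millions of users
# is where chip costs add up.
result = generator("The future of AI hardware is", max_new_tokens=20)
print(result[0]["generated_text"])
```

Each inference call like the one above is cheap on its own, but repeated across millions of daily users it becomes the dominant computing expense, which is what makes the choice of chip so consequential.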
According to recent reports, OpenAI is now renting Google’s tensor processing units (TPUs) through Google Cloud, its first meaningful use of non-Nvidia hardware. TPUs have traditionally been reserved for Google’s internal AI operations, but the company has started making them available to select external clients. Google has already attracted major names, including Apple as well as Anthropic and other AI startups that compete directly with OpenAI.
The arrangement is an unexpected collaboration between two fierce rivals in the AI space. OpenAI, backed by Microsoft, and Google have often been seen as leading contenders in the race to dominate artificial intelligence. But the practical demands of AI infrastructure are evolving rapidly, and OpenAI appears to be prioritizing efficiency and scalability over exclusivity.
OpenAI’s move to diversify its hardware suppliers is also seen as a strategic bid to reduce costs. Inference at scale is an expensive operation, and TPUs may offer a cheaper yet still efficient alternative to Nvidia’s GPU systems. However, Google is reportedly not offering its most advanced TPUs to OpenAI, possibly to retain an edge for its own AI services.
The development also reflects how Google is using its proprietary AI stack to boost its cloud business. By offering both hardware and software tailored to artificial intelligence workloads, Google Cloud has begun attracting high-profile customers who were previously dependent on other providers. Winning OpenAI, even in a limited capacity, signals a broader shift in how cloud providers and AI firms choose to collaborate.
As OpenAI scales its products and pursues its mission of artificial general intelligence, the hardware behind the scenes will play a crucial role. Whether this experiment with TPUs remains a short-term arrangement or evolves into a longer-term shift, the battle for AI computing supremacy is clearly no longer just about algorithms but also about chips and clouds.