OpenAI is strengthening its AI infrastructure muscle by hiring Sachin Katti, one of Intel’s top technology leaders, to head its compute division. The move signals OpenAI’s intent to double down on hardware optimisation, data centre expansion, and large-scale compute efficiency—all critical for powering its next generation of AI models and future artificial general intelligence (AGI) systems.

Confirming the development, Greg Brockman, OpenAI’s President and Co-founder, said on X,

“Incredibly excited to work with Sachin Katti on designing and building our compute infrastructure, which will power our AGI research and scale its applications to benefit everyone.”

This appointment marks a major strategic shift for OpenAI, which has rapidly grown from a model research lab to a hardware-intensive AI company, competing with industry giants like Google DeepMind, Anthropic, and Amazon in the race for compute dominance.

Katti’s Journey: From Academia to AI Infrastructure Leadership
Before joining OpenAI, Sachin Katti served as Senior Vice President and General Manager of Intel’s Network and Edge Group (NEX), where he led the company’s AI and networking innovations. Over the years, he also shaped Intel’s long-term AI strategy, influencing research directions within Intel Labs and driving partnerships with global startups.

Katti’s technical background runs deep. Before Intel, he was a professor at Stanford University, specialising in wireless communication, networking, and coding theory. His academic contributions earned him accolades such as an ACM Doctoral Dissertation Award honourable mention and the IEEE William R. Bennett Prize.

He also co-founded two successful startups — Kumu Networks, which pioneered self-interference cancellation, and Uhana, an AI-based network optimisation company acquired by VMware.

With his blend of academic innovation and commercial-scale systems experience, Katti is seen as the perfect choice to architect OpenAI’s compute backbone, bridging the worlds of research, infrastructure, and product delivery.

Intel’s Leadership Shift After Katti’s Departure
Following Katti’s exit, Intel announced that its CEO, Lip-Bu Tan, will directly oversee the AI and Advanced Technologies Group, ensuring continuity across its AI initiatives.

Intel issued a statement thanking Katti for his leadership, adding,

“Lip-Bu will lead the AI and Advanced Technologies Groups, working closely with the teams to continue delivering innovation.”

This transition comes as Intel faces mounting competition from NVIDIA, AMD, and custom chip developers aligned with hyperscalers like Google Cloud, Microsoft Azure, and Amazon Web Services. The company is now focused on accelerating chip production and AI hardware design to remain relevant in the new era of AI-driven computing.

Strengthening OpenAI’s Compute Vision
OpenAI’s decision to bring in a leader of Katti’s calibre reflects a clear strategic focus — AI innovation now depends as much on hardware orchestration as it does on model design.

As OpenAI expands its compute requirements for training and inference, the company is expected to build next-gen data centre clusters, leveraging both in-house systems and partnerships with major cloud providers like Microsoft Azure.

Katti’s deep understanding of edge systems, AI hardware pipelines, and network optimisation could prove vital in lowering latency, boosting throughput, and optimising costs — key priorities as OpenAI scales GPT-based and multimodal AI systems globally.

Bridging Compute, AI, and Research at Scale
Katti’s appointment also highlights a larger trend — top-tier chip engineers and AI architects migrating toward AI-first organisations that want greater control over compute design.

As AI models approach trillion-parameter scales, compute efficiency becomes the new competitive frontier. OpenAI’s focus on co-designing hardware and model architectures positions it closer to vertical integration, similar to Apple’s control over silicon design for performance gains.

This approach can also help OpenAI reduce dependency on external vendors, optimise cloud costs, and fine-tune its data centre strategies for energy efficiency — a growing concern amid AI’s massive carbon footprint.

A Strategic Move Amid the Global Compute Race
Katti’s addition to the OpenAI leadership team comes as tech giants battle for compute supremacy. From Google’s Tensor Processing Units (TPUs) and Amazon’s Trainium chips to Microsoft’s AI server clusters, the race to build cost-efficient, scalable, and sustainable compute systems has never been more intense.

For OpenAI, hiring Katti is a signal of intent: it wants to control its compute destiny. The company’s ambitions go beyond software models to a vertically integrated ecosystem in which data, models, and hardware work seamlessly together.

As OpenAI moves closer to AGI-scale development, it needs a compute foundation that can support vast model training workloads without external bottlenecks. Katti’s task will be to design that infrastructure — resilient, efficient, and ready for exponential AI growth.
