Google and Intel Expand AI Infrastructure Partnership

Google Cloud and Intel have announced an expanded multiyear partnership under which Google Cloud will continue to use Intel's AI infrastructure and the two companies will collaborate on processor development.

Key Aspects of the Partnership:

  • Continued Use of Intel Xeon Processors: Google Cloud will utilize Intel's Xeon processors, including the latest Xeon 6 chips, for its AI, cloud, and inference tasks. This builds on a long-standing relationship where Google has used Intel's Xeon processors for decades.
  • Expanded Co-Development of Custom IPUs: The partnership will extend the co-development of custom Infrastructure Processing Units (IPUs). These specialized chips are designed to accelerate and manage data center tasks by offloading them from central processing units (CPUs).
  • Focus on ASIC-Based IPUs: The chip development, which began in 2021, will now concentrate on custom Application-Specific Integrated Circuit (ASIC)-based IPUs.
  • Industry Demand for CPUs: This expansion occurs at a time when the industry is experiencing high demand for CPUs. While GPUs are essential for training AI models, CPUs are critical for running AI models and for general AI infrastructure.

Strategic Importance:

Intel CEO Lip-Bu Tan highlighted that scaling AI requires more than just accelerators, emphasizing the need for balanced systems. He stated, "CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand."

The increasing demand for CPUs has also led other companies, such as Arm Holdings, to release in-house chips to help address the global shortage.