“…and I applaud Jensen for his tenacity in just saying, ‘No, I am not trying to build one of those; I am trying to deliver against the workload, starting in graphics,’” said Gelsinger. “You know, it became this broader view. And then he got lucky with AI. One time I was debating with him, he said, ‘No, I got really lucky with the AI workload because it just demanded that type of architecture.’ That is where the center of application development is [right now].”
One of the reasons why Larrabee was canceled as a GPU in 2009 was that it could not compete with AMD’s and Nvidia’s graphics solutions of the time. To some extent, this was a consequence of Intel’s pursuit of full programmability, which meant Larrabee omitted crucial fixed-function GPU hardware such as raster operations units. That choice hurt graphics performance and made software development more complex.
“I had a project that was well known in the industry called Larrabee, which was trying to bridge the programmability of the CPU with the throughput-oriented architecture [of a GPU], and I think had Intel stayed on that path, you know, the future could have been different,” said Gelsinger during a webcast. “I give Jensen a lot of credit [as] he just stayed true to that throughput computing or accelerated [vision].”
Unlike GPUs from AMD and Nvidia, which use proprietary instruction set architectures (ISAs), Intel’s Larrabee used the x86 ISA with Larrabee-specific vector extensions. This was an advantage for parallelized general-purpose computing workloads but a disadvantage for graphics applications. As a result, the design was repurposed for supercomputing workloads in 2010 and reintroduced as the Xeon Phi processor. However, it gained little traction as traditional GPU architectures gained general-purpose computing capabilities via Nvidia’s CUDA framework, as well as the OpenCL, Vulkan, and DirectCompute APIs, and proved easier to scale in performance. After the Xeon Phi ‘Knights Mill’ failed to meet expectations, Intel dropped the Xeon Phi project in favor of data center GPUs for HPC and specialized ASICs for AI between 2018 and 2019.
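To give a sense of what those general-purpose computing capabilities look like in practice, below is a minimal, self-contained CUDA vector-add program. It is an illustrative sketch of the kind of non-graphics GPU code CUDA made straightforward to write, not an example taken from Intel or Nvidia documentation.

    // Minimal CUDA example: element-wise vector addition on the GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread computes one element of the output array.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified memory keeps the sketch short; production HPC code often
        // manages host/device copies explicitly with cudaMemcpy.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();  // wait for the kernel to finish

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

The appeal for developers was that a few thousand threads could be launched with one line of ordinary C++-like code, rather than expressing the computation through a graphics pipeline or hand-written SIMD intrinsics, which is what made throughput architectures accessible far beyond graphics.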
To a large extent, Larrabee and its successors in the Xeon Phi line failed because they were built on a CPU ISA that did not scale well for graphics, AI, or HPC. That failure was set in motion in the mid-2000s, when CPUs were still dominant and Intel’s technical leads believed x86 was the way to go. Fast forward to today, and Intel’s attempts at a more conventional GPU design for AI have largely failed as well: the company recently canceled its Falcon Shores data center GPUs and is instead pinning its hopes on the next-gen Jaguar Shores, which is not slated for release until next year.