Image courtesy of Gizmodo.com
There’s a storm brewing in the graphics card industry, and its name is Larrabee. Intel is positioning Larrabee to take on nVidia, quite possibly the only other company in the computer hardware business to execute as well as Intel. Each company is the dominant force in its respective market. In the first quarter of 2008, Intel accounted for 79.7% of global microprocessor revenue, versus AMD’s 13%.
The competition between AMD’s graphics division (formerly ATI) and nVidia, however, is much closer in terms of market share. AMD/ATI has competed with nVidia through aggressive pricing, but nVidia has been getting the upper hand in both technology and sales. So while AMD/ATI is certainly a major player, its parent company’s rivalry with Intel in the microprocessor market is already decades old.
What’s new is the prospect of a direct conflict between nVidia and Intel. nVidia faces the Intel juggernaut, a microprocessor company with more resources than nVidia and AMD/ATI combined. Jumping into a whole new market dominated by specialist companies that have been locked in heated competition for roughly a decade is close to suicidal. But if there’s one company with the technological resources, cash reserves, and product-execution prowess (Core series, anyone?) to pull it off, it’s Intel.
What makes Larrabee so fascinating is that Intel is proposing a complete paradigm shift in the graphics card industry. Ever since graphics cards became a mainstream computer component, the industry has taken the approach that the hardware should be tailored to its purpose. In other words, design the GPU so that its circuits (pipelines, in microprocessor jargon) are optimized and specialized for graphics-style calculations. Vectors galore!
Intel is taking the exact opposite approach. Rather than designing a processor to do just a few types of calculations really well, Intel is designing a processor that is closer to a general-purpose CPU than to a dedicated GPU. In fact, Larrabee’s architecture is built out of scaled-down and optimized versions of the good ol’ original Pentium (the first of Intel’s processors to bear the familiar Pentium moniker). What nVidia and AMD/ATI (as well as Intel’s rather measly graphics division) have done is design processors specifically to compute vectors and other graphics-related calculations. Intel is instead building a massively parallel version of its general-purpose processors.
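To make the contrast concrete, here’s a rough sketch in C++ (purely illustrative; I’m not describing Larrabee’s actual vector instruction set here). The first function leans on x86 SSE intrinsics as a stand-in for the “crunch many numbers per instruction” style that GPU pipelines hard-wire; the second is the kind of plain scalar loop any general-purpose Pentium-class core can run.

```cpp
// Illustrative contrast only: SSE intrinsics stand in for dedicated vector
// hardware; the scalar loop is what an ordinary general-purpose core runs.
#include <xmmintrin.h>  // x86 SSE intrinsics
#include <cstdio>

// "Specialized" path: four floats added per instruction. Vectors galore!
void add_vec4(const float* a, const float* b, float* out) {
    __m128 va = _mm_loadu_ps(a);             // load 4 floats at once
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));  // 4 additions in one go
}

// General-purpose path: the same math as ordinary scalar code.
void add_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

int main() {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, r[4];
    add_vec4(a, b, r);
    std::printf("%.0f %.0f %.0f %.0f\n", r[0], r[1], r[2], r[3]);
}
```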
Herein lies the HUGE difference between these two approaches. What we’ll term “old-school” designs by nVidia and AMD/ATI use the hardware architecture to do all the hard work. Instructions (i.e. code telling the GPU how to compute something) have to be written specifically for that hardware. The two main standards in this field have been OpenGL and Microsoft’s proprietary DirectX, and both ultimately depend on the hardware underneath: code written against these APIs is executed by customized silicon. Very expensive and very particular (as far as coding is concerned), but very efficient and powerful.
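For a taste of what “coding to the API” looks like, here’s a minimal fixed-function OpenGL program of the sort common in this era (it assumes GLUT is installed). Notice that the developer never touches the hardware; the driver translates these calls onto whatever specialized silicon sits underneath.

```cpp
// "Old-school" model: the developer talks to the OpenGL API, and the driver
// maps these calls onto the GPU's specialized hardware.
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);  // the API and driver decide how this is rasterized
        glColor3f(1, 0, 0); glVertex2f(-0.8f, -0.8f);
        glColor3f(0, 1, 0); glVertex2f( 0.8f, -0.8f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.8f);
    glEnd();
    glFlush();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("Fixed-function triangle");
    glutDisplayFunc(display);
    glutMainLoop();         // hand control over to the API's event loop
}
```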
Intel, on the other hand, is trying to achieve the same power through a “new-school” design: software, not hardware. The hardware is just a bunch of vanilla (general-purpose) processors linked together to run parallel computations. That isn’t inherently more efficient than the “old-school” approach, but it removes the time-consuming hardware customization and frees developers from being locked into APIs (Application Programming Interfaces) like OpenGL and DirectX.
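And here’s what “graphics as just software” can look like: a toy rasterizer in plain C++, with no graphics API at all, filling a triangle pixel by pixel and writing the result to a PPM image. It’s a simplified sketch of the classic edge-function technique, not Intel’s actual renderer, but every line is ordinary code a general-purpose core can run (and that could be split across many cores).

```cpp
// "New-school" sketch: a software rasterizer with no GPU API whatsoever.
#include <cstdio>
#include <vector>

// Edge function: signed area test; >= 0 means the point is on the inner side.
float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

int main() {
    const int W = 256, H = 256;
    std::vector<unsigned char> img(W * H * 3, 0);
    // Triangle vertices in pixel coordinates, ordered so all edge tests
    // come out positive for interior points.
    float x0 = 30, y0 = 220, x1 = 230, y1 = 200, x2 = 128, y2 = 30;

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample at pixel center
            // Inside the triangle only if inside all three edges.
            if (edge(x0, y0, x1, y1, px, py) >= 0 &&
                edge(x1, y1, x2, y2, px, py) >= 0 &&
                edge(x2, y2, x0, y0, px, py) >= 0) {
                unsigned char* p = &img[(y * W + x) * 3];
                p[0] = 255; p[1] = 128; p[2] = 0;  // flat orange fill
            }
        }

    // Write a plain PPM so no graphics library is needed to view the result.
    FILE* f = std::fopen("triangle.ppm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P6\n%d %d\n255\n", W, H);
    std::fwrite(img.data(), 1, img.size(), f);
    std::fclose(f);
}
```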
This last point is the biggest gamble on Intel’s part. Sure, the hardware could fail or fall far short of nVidia’s and AMD/ATI’s powerful graphics cards. But since we’re talking about Intel here, they have the reserves to hold out for years until they become competitive in the graphics market. Intel’s hardware engineers are also some of the best in the world, so from the hardware perspective, they have a lot going for them. The part that could really trip Intel up with Larrabee is the game/program developers. They are the ones who will help or hinder Intel by either adopting the “new-school” software-centered paradigm or sticking with proven “old-school” hardware-centered technology.
Game/program developers have years of experience and familiarity with OpenGL and DirectX. Phenomenally complex and visually stunning games like Crysis have been developed for the hardware of nVidia and AMD/ATI using those two main APIs. To abandon this environment for a software-centered one is to ask developers to step into a Brave New World of coding. They gain tremendous freedom by being able to write their own rendering code (even their own APIs) that will run on any Larrabee-style GPU, but they also lose the powerful APIs they’ve become proficient at using.
With the introduction of Larrabee, we might be witnessing the emergence of a whole new era in how computers function at the hardware level. Until now, graphics and general-purpose computing have been largely separate domains (at least in terms of hardware). If Larrabee succeeds, we could quickly see a switch from that split to brute combined CPU/GPU power: any calculation (graphics or otherwise) presented to the computer could be tackled by the combined power of the GPU and CPU. Currently, one processor usually sits on the sidelines while the other does the work, depending on the type of computation.
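As a rough sketch of that “combined brute force” idea, here’s an ordinary C++ program that splits one big computation across however many general-purpose cores the machine has, using standard threads. The workload here (a big summation) is arbitrary, and that’s exactly the point: the same pool of cores could just as well be shading pixels.

```cpp
// Sketch of the "combined brute force" idea: a pool of general-purpose cores
// chews through any workload, graphics or not, by splitting it into chunks.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int N = 1 << 24;
    std::vector<float> data(N, 1.5f);
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;  // fallback when the core count is unknown

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < cores; ++t)
        pool.emplace_back([&, t] {
            // Each core sums its own slice; no specialized hardware required.
            int begin = N / cores * t;
            int end = (t == cores - 1) ? N : N / cores * (t + 1);
            for (int i = begin; i < end; ++i) partial[t] += data[i];
        });
    for (auto& th : pool) th.join();

    double total = 0.0;
    for (double p : partial) total += p;
    std::printf("sum = %.1f across %u cores\n", total, cores);
}
```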
Time will tell, but this could be a huge change in how graphics and processing in general are performed. Exciting, isn’t it? Ok, well maybe only for geeks like me. 🙂