We at PC Perspective are about as familiar with the upcoming Intel Larrabee technology as anyone outside of Intel can be; we have covered it at various stages of the development process since early 2007. For those of you unfamiliar, Larrabee is a future graphics technology from Intel based around a many-core x86 architecture.
But enough on the past, let's look into the future. It looks like Intel will indeed be sharing more information about the Larrabee architecture and how it will handle rasterization. Rasterization is important to discuss simply because if Intel has any plans to make headway into the existing gaming or professional markets, support for that rendering method is a MUST. And because the architecture is so far removed from the designs we are used to seeing in GPUs from AMD and NVIDIA, there are doubts as to how Larrabee will perform out of the gate. Even John Carmack in our interview from last year
noted that Larrabee had to be a good rasterizer if it had any hope of survival.
At the Game Developers Conference in San Francisco this month, a couple of talks from Intel are apparently going to shed light on this very issue. And just from the session descriptions we have seen, the information looks to be, well, informational. First off, we know that the new vector instruction-set extension Intel has created for Larrabee is known as "Larrabee New Instructions", aka "LRBni".
Larrabee up close
LRBni will include features like 16-wide SIMD, multiply-add, ternary instructions, predication, built-in data-format conversion and gather/scatter support. The first session will then look at how this instruction set will be applied to the "not obviously vectorizable" problem of rasterization. This is a key message for Intel to get across to game developers as it will potentially show the flexibility that Larrabee and the new instruction set have to offer for problems that are not "obviously shader-like."
A second session at GDC will look at the basics of working on Larrabee by demonstrating simple math and how to vectorize it, along with loops, conditionals, and complex flow control using the new LRBni instruction set. It will also look at data formats for Larrabee: when to use Structures of Arrays (SOA) versus Arrays of Structures (AOS), and how to use gather/scatter efficiently with the same data structures used in existing game engines. That final point is the most important one for Intel: they need to teach developers how to take advantage of the upcoming Larrabee hardware without dramatically changing current game engines, or adoption will be much slower than Intel wants.
The takeaway from the two Intel sessions is the same: they hope to show developers how programming for Larrabee in their next gaming engine will open up possibilities for performance and feature enhancements:
The attendees will learn about the latest processor architecture from Intel, and the instruction set used to program it. Understanding how this architecture and instruction set works will give the attendee information on how to design the next iteration of their game engine, and the possibilities available when programming Larrabee natively.
Intel's end-game for all this GPGPU discussion, including things like OpenCL and CUDA, is that hardware like Larrabee (a many-core, vector-enhanced x86 architecture) will become the visual processing platform of the future. Whether or not Intel is correct will be determined as products based on Larrabee hit the streets in 2010.
Of course, the information that we are REALLY looking forward to getting from Intel about Larrabee will more than likely not be presented anytime soon. How many cores will Larrabee have when it's released? What kind of relative performance will Larrabee offer in current-generation rasterization-based games on the PC? I don't think we'll be seeing that kind of detail until late in 2009, but you can be sure we'll be on the lookout should information appear anywhere.