08/10/2009 Rasterization on Larrabee

When it comes to Intel graphics, I never feel like using enthusiastic words, because working with Intel graphics chips is so painful (especially with OpenGL!). Nevertheless, I can't deny my interest in Intel's Larrabee project. I don't think Larrabee is the right way to go, and I don't think Intel is going to get it right (even by their own expectations).

Larrabee is a platform following a broader trend of convergence: chips become more and more programmable, so why not make everything programmable? Tim Sweeney's recent presentation at HPG 2009 is a good illustration of this trend.

A week ago, an excellent article about "Rasterization on Larrabee" was posted on DevMaster.net. The article is really detailed and explains how rasterization can be optimized, even in the multisampling case. The method is tile-based rendering, as in PowerVR GPUs, but quite a bit less advanced. This article raises two comments from me.

First, a likely new bottleneck with Direct3D 11 GPUs will be the setup engine, where rasterization is performed. So far, it is one of the only components on nVidia and AMD chips that has never become massively parallel. With the rise of tessellation, the number of triangles to rasterize could be a lot higher! AMD, with their Radeon 58** series, seems to have tried something (some kind of dual setup engine), but according to some reviews, they didn't get it right. I firmly believe that tiled engines are the way to go: potentially massively parallel rasterization! However, getting it right is hard; it really is a much more complicated hardware component.
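To illustrate why tiling lends itself to parallel rasterization, here is a minimal sketch (my own illustration, not Intel's or anyone's actual implementation; tile size and types are arbitrary assumptions) of binning triangles into screen tiles by their bounding boxes. Once binned, each tile's triangle list can be rasterized by a separate core with no synchronization on the framebuffer:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical tile size; real hardware would pick this to fit on-chip memory.
const int TILE_SIZE = 64; // 64x64 pixel tiles

struct Triangle { float x[3], y[3]; }; // screen-space vertex positions

// Bin each triangle into every tile its bounding box overlaps.
// Each tile's list could then be rasterized independently, in parallel.
std::vector<std::vector<int>> binTriangles(
    const std::vector<Triangle>& tris, int width, int height)
{
    int tilesX = (width  + TILE_SIZE - 1) / TILE_SIZE;
    int tilesY = (height + TILE_SIZE - 1) / TILE_SIZE;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    for (int i = 0; i < (int)tris.size(); ++i) {
        const Triangle& t = tris[i];
        float minX = std::min({t.x[0], t.x[1], t.x[2]});
        float maxX = std::max({t.x[0], t.x[1], t.x[2]});
        float minY = std::min({t.y[0], t.y[1], t.y[2]});
        float maxY = std::max({t.y[0], t.y[1], t.y[2]});

        // Clamp the bounding box to the tile grid.
        int tx0 = std::max(0, (int)minX / TILE_SIZE);
        int ty0 = std::max(0, (int)minY / TILE_SIZE);
        int tx1 = std::min(tilesX - 1, (int)maxX / TILE_SIZE);
        int ty1 = std::min(tilesY - 1, (int)maxY / TILE_SIZE);

        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;
}
```

Of course, a bounding-box test is conservative (a sliver triangle gets binned into tiles it never touches), and the real difficulty is elsewhere: load balancing across tiles and keeping the binning pass itself from becoming the new serial bottleneck.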

Second, I'm quite astonished to see this article published on DevMaster... It's not the first time that Intel has published something on a community website (previously on GameDev.net). A strategy to drive Larrabee adoption by developers?

Copyright Christophe Riccio 2002-2016 all rights reserved