Hello, I am a developer of graphics software for Windows applications. The application I am working on uses a technique (radiosity) that requires me to write a software rasterizer to determine the light contribution for each individual texel within a lightmap (texture). The problem I'm having is duplicating the rasterization behavior of AMD's hardware. I am using a Radeon HD 4550 graphics device.
The radiosity algorithm takes as input a group of orthogonal triangles. These triangles cover a certain area of a texture. Before the software rasterizer runs, the hardware device is used to predetermine point, spot, and directional light contributions (i.e., the lightmap textures are drawn by the hardware before the radiosity algorithm determines the transfer of light energy; using the hardware speeds up the process). My task is to emulate the hardware's method of determining pixel coverage so that the radiosity algorithm covers exactly the same pixels.
The results yielded by the radiosity algorithm, however, often have visible artifacts caused by overestimation or underestimation of pixel coverage. These artifacts can be removed by hand in a paint program, but needless to say that is less than ideal. What I'd like to do is tighten up the software rasterization algorithm so that pixel coverage is estimated more accurately.
How does AMD rasterize a triangle? I know this is a broad question, but a link to documentation would help a lot. I've tried edge equations, integer rasterization, floating-point rasterization, and snapping UV coordinates to subpixels (1/256), but I've not yet been able to accurately emulate the coverage the hardware produces.
I can include code if you like, but general information will help.
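For concreteness, here is a minimal sketch of the kind of thing I've been trying: coordinates snapped to a fixed-point subpixel grid, integer edge functions, and the Direct3D top-left fill rule so that a sample lying exactly on a shared edge is claimed by exactly one triangle. The names and the 1/16 subpixel grid are my own choices, not anything I know the hardware uses:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

static const int kSubBits = 4;              // 4 fractional bits -> 1/16 subpixel grid (assumed)
static const int kSubStep = 1 << kSubBits;  // 16 subpixel units per texel

struct FixedVec { int32_t x, y; };

// Snap a floating-point position to the subpixel grid (round to nearest).
static FixedVec Snap(Vec2 v) {
    return { (int32_t)std::lround(v.x * kSubStep),
             (int32_t)std::lround(v.y * kSubStep) };
}

// Edge function: twice the signed area of (a, b, p).
// 64-bit so the fixed-point products cannot overflow.
static int64_t Edge(FixedVec a, FixedVec b, FixedVec p) {
    return (int64_t)(b.x - a.x) * (p.y - a.y)
         - (int64_t)(b.y - a.y) * (p.x - a.x);
}

// Top-left rule, for a winding whose interior makes the edge functions
// positive (y grows downward): a "top" edge is horizontal with the
// interior below it; a "left" edge is one that goes upward.
static bool IsTopLeft(FixedVec a, FixedVec b) {
    return (a.y == b.y && b.x > a.x) || (b.y < a.y);
}

// Rasterize one triangle in texel space; returns the covered texels.
static std::vector<std::pair<int,int>> Rasterize(Vec2 p0, Vec2 p1, Vec2 p2) {
    FixedVec v0 = Snap(p0), v1 = Snap(p1), v2 = Snap(p2);

    // Force the winding that makes interior edge functions positive.
    if (Edge(v0, v1, v2) < 0) std::swap(v1, v2);

    int minX = std::min({v0.x, v1.x, v2.x}) >> kSubBits;
    int maxX = std::max({v0.x, v1.x, v2.x}) >> kSubBits;
    int minY = std::min({v0.y, v1.y, v2.y}) >> kSubBits;
    int maxY = std::max({v0.y, v1.y, v2.y}) >> kSubBits;

    // Edges that are not top-left get a -1 bias, so samples lying exactly
    // on them are excluded; a shared edge is then rasterized exactly once.
    int64_t bias0 = IsTopLeft(v1, v2) ? 0 : -1;
    int64_t bias1 = IsTopLeft(v2, v0) ? 0 : -1;
    int64_t bias2 = IsTopLeft(v0, v1) ? 0 : -1;

    std::vector<std::pair<int,int>> covered;
    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            // Sample at the texel center, expressed on the subpixel grid.
            FixedVec p = { (x << kSubBits) + kSubStep / 2,
                           (y << kSubBits) + kSubStep / 2 };
            if (Edge(v1, v2, p) + bias0 >= 0 &&
                Edge(v2, v0, p) + bias1 >= 0 &&
                Edge(v0, v1, p) + bias2 >= 0)
                covered.push_back({x, y});
        }
    }
    return covered;
}
```

With this version, two triangles that share a diagonal cover each texel of their bounding quad exactly once, yet I still see coverage differences against the hardware's output along some edges, which is what makes me think the snapping precision or the tie-breaking convention is where I diverge from what the device actually does.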