Hello there,
I think I discovered a bug in AMD's implementation of GLSL's texelFetch(). I'll try to explain briefly what I am doing and will add only a few code snippets:
1. Create an MSAA FBO A with an MSAA color texture and an MSAA depth stencil texture attached, and render the scene to it:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, DepthTexture->Handle, 0);
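For context, the setup of FBO A looks roughly like this (pseudocode sketch, not my actual code; the sample count, format, and the ColorTexture/FboA names are placeholders):

```
// Create the multisampled depth stencil texture (4 samples assumed)
glGenTextures(1, &DepthTexture->Handle);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, DepthTexture->Handle);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH24_STENCIL8,
                        Width, Height, GL_FALSE);

// Attach it, together with a matching MSAA color texture, to FBO A
glBindFramebuffer(GL_FRAMEBUFFER, FboA);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, ColorTexture->Handle, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D_MULTISAMPLE, DepthTexture->Handle, 0);
```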
2. Bind the depth texture from FBO A to texture unit 0 and execute a shader which uses the texture to render to another FBO B (for example a depth buffer linearization):
#version 150 core
uniform sampler2DMS u_Texture0;
...
void main(void)
{
    float Z = texelFetch(u_Texture0, ivec2(gl_FragCoord.xy), 0).x;
    float Z_Linear = 2.0 * u_ZNear / (u_ZFar + u_ZNear - Z * (u_ZFar - u_ZNear));
    f_Color = vec4(Z_Linear, Z_Linear, Z_Linear, 1.0);
}
3. Activate another FBO C and another shader, with an MSAA color texture bound to texture unit 0 and the same MSAA depth texture from step 1 bound to texture unit 1:
#version 150 core
uniform sampler2DMS u_Texture0;
uniform sampler2DMS u_Texture1;
...
void main(void)
{
    f_Color = texelFetch(u_Texture0, ivec2(gl_FragCoord.xy), 0);
    float Z = texelFetch(u_Texture1, ivec2(gl_FragCoord.xy), 0).x;
    ...
}
Now what happens is that the texelFetch() in step 3 returns garbage. But if I reverse the order in step 3 and bind the depth texture to texture unit 0 and the color texture to texture unit 1, while fetching the color from u_Texture1 and the depth value from u_Texture0, then everything is fine. In that case texelFetch() returns correct values.
So I really have the feeling there is a bug in the driver. For some reason, when you bind the depth stencil texture to the second texture unit after it was previously bound to the first unit, you get this effect.
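To make the two cases concrete, the binding orders look roughly like this (pseudocode sketch; ColorTex/DepthTex are placeholder handles):

```
// Fails: depth texture was on unit 0 in step 2, now moved to unit 1
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, ColorTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, DepthTex);
// u_Texture0 = 0 (color), u_Texture1 = 1 (depth)  -> texelFetch returns garbage

// Works: depth texture stays on unit 0, as in step 2
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, DepthTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, ColorTex);
// u_Texture0 = 0 (depth), u_Texture1 = 1 (color)  -> correct values
```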
Help is really appreciated!
SOFTWARE:
Driver Packaging Version 8.92-111109a-129966C-ATI
Catalyst Version 11.12
2D Driver Version 8.01.01.1215
OpenGL Version 6.14.10.11318
Catalyst Control Center Version 2011.1109.2212.39826
HARDWARE:
Graphics Card Manufacturer Powered by AMD
Graphics Chipset ATI Radeon HD 5800 Series
Device ID 6899
Vendor 1002
Subsystem ID E140
Subsystem Vendor ID 174B
Graphics Bus Capability PCI Express 2.0
Maximum Bus Setting PCI Express 2.0 x16
BIOS Version 012.019.000.002
BIOS Part Number 113-C00140-00X
BIOS Date 2010/03/09
Memory Size 1024 MB
Memory Type GDDR5
Core Clock in MHz 725 MHz
Memory Clock in MHz 1000 MHz
Total Memory Bandwidth in GByte/s 128.0 GByte/s
OS: Windows 7 (64 bit) with all updates
Ideas anyone?
Hi,
That sounds like a bug we squashed recently which sometimes left stale data in a texture that had been previously bound to an FBO, depending on the particular order in which things were bound. The fix should hit a regular driver release soon. In the meantime, if you can provide us with access to your application we can take a look at it and see if we can figure out what the problem is.
Cheers,
Sorry for the late reply. The issue still exists; was it supposed to be fixed in this release?
I could try to provide more source code. Maybe I can also put together a tiny sample application, but that would take some time.
Driver Packaging Version 8.95-120214a-134393C-ATI
Catalyst Version 12.3
Provider Advanced Micro Devices, Inc.
2D Driver Version 8.01.01.1235
2D Driver File Path /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/CLASS/{4D36E968-E325-11CE-BFC1-08002BE10318}/0000
Direct3D Version 7.14.10.0894
OpenGL Version 6.14.10.11554
Catalyst Control Center Version 2012.0214.2218.39913
I attached two screenshots, the top left quarter is my debug output:
"wrong.jpg" shows the issue I was talking about (strange artifacts)
"correct.jpg" shows how it should look
As I mentioned, it looks correct (as in "correct.jpg") when I swap the color/depth texture bindings in the C++ code and then, in the shader, swap the sampler uniforms used by texelFetch().
Hi xtremer,
Thanks for your reply. Please try to attach your tiny sample application; it will be really handy for us to identify where the problem lies.