I'm having another strange issue that seems to be related to GL_ARB_get_program_binary. The symptom is a GL_INVALID_OPERATION error after a glDrawElements call. The odd thing is that this error only occurs when attempting to use a sampler2D in a shader. Draw calls using shaders that only access position and color vertex attributes, for example, do not result in an error.
First off, here are my Catalyst stats:
Driver Packaging Version 8.821-110126a-112962C-ATI
Catalyst Version 11.2
Provider ATI Technologies Inc.
2D Driver Version 8.01.01.1123
2D Driver File Path /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/CLASS/{4D36E968-E325-11CE-BFC1-08002BE10318}/0002
Direct3D Version 7.14.10.0812
OpenGL Version 6.14.10.10524
Catalyst Control Center Version 2011.0126.1749.31909
Primary Adapter
Graphics Card Manufacturer Powered by AMD
Graphics Chipset AMD Radeon HD 6800 Series
Device ID 6738
Vendor 1002
Subsystem ID 2305
Subsystem Vendor ID 1787
Graphics Bus Capability PCI Express 2.0
Maximum Bus Setting PCI Express 2.0 x16
BIOS Version 013.009.000.001
BIOS Part Number 1134965469
BIOS Date 2010/11/03
Memory Size 1024 MB
Memory Type GDDR5
Core Clock in MHz 900 MHz
Memory Clock in MHz 1050 MHz
Total Memory Bandwidth in GByte/s 134.4 GByte/s
I am running an OpenGL 4.1 core-profile context with the forward-compatible and debug bits set. I have also tried a 3.3 context, with similar results.
My application is drawing a full screen quad with a single texture on it. If I load the shader from a raw source string, the application will successfully draw the texture to the screen. However, if I load the shader from a binary program created on the same machine using the functionality in the GL_ARB_get_program_binary extension, I get a GL_INVALID_OPERATION error from the glDrawElements call for the full screen quad render.
This is the GLSL vertex shader I am using:
#version 150
uniform mat4 modelViewProjectionMatrix;
in vec3 in_position;
in vec2 in_texCoord0;
out vert2frag
{
vec2 texCoord0;
} OUT;
void main()
{
OUT.texCoord0 = in_texCoord0;
gl_Position = modelViewProjectionMatrix * vec4(in_position.xyz,1.0);
}
This is the GLSL fragment shader I am using:
#version 150
uniform sampler2D diffuseTexture;
in vert2frag
{
vec2 texCoord0;
} IN;
out vec4 out_fragColor0;
void main()
{
out_fragColor0 = texture(diffuseTexture, IN.texCoord0);
}
This is the basic flow I'm using when compiling a shader with the GL_ARB_get_program_binary extension:
1. Compile raw shader sources with glCompileShader
2. Setup all of my vertex attribute locations with glBindAttribLocation
3. Setup all of my fragment output locations with glBindFragDataLocation
4. Set GL_PROGRAM_BINARY_RETRIEVABLE_HINT to GL_TRUE with glProgramParameteri.
5. Link the program with glLinkProgram.
6. Retrieve the binary with glGetProgramBinary.
Then I load the compiled program with glProgramBinary, passing the same binary format that glGetProgramBinary returned.
I do not receive any GL error messages when loading the binary programs from disk. I've even gone so far as to do the compilation, save to a memory buffer, and then load directly from that memory buffer as a test, to rule out any file-handling errors.
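For reference, a minimal sketch of that pack/unpack round-trip (plain C++, no GL context needed; packBinary/unpackBinary are illustrative helper names, not part of my real code). The detail that matters is that the binaryFormat enum returned by glGetProgramBinary must be stored alongside the blob and handed back to glProgramBinary verbatim:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Pack a program binary (format enum + blob) into one buffer, as you would
// before writing it to disk. glProgramBinary needs the format value back
// exactly as glGetProgramBinary reported it, so it travels with the data.
std::vector<char> packBinary(uint32_t binaryFormat, const std::vector<char>& blob) {
    std::vector<char> out(sizeof(uint32_t) + blob.size());
    std::memcpy(out.data(), &binaryFormat, sizeof(uint32_t));
    std::memcpy(out.data() + sizeof(uint32_t), blob.data(), blob.size());
    return out;
}

// Reverse of packBinary: split the buffer back into format enum and blob.
void unpackBinary(const std::vector<char>& in, uint32_t& binaryFormat,
                  std::vector<char>& blob) {
    std::memcpy(&binaryFormat, in.data(), sizeof(uint32_t));
    blob.assign(in.begin() + sizeof(uint32_t), in.end());
}
```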
One thing to note is that if I avoid the texture sample in the shader, I do not receive the error from glDrawElements (and the screen output is what I would expect). In other words, if I change the last line of the fragment shader to:
out_fragColor0 = vec4(IN.texCoord0.x, IN.texCoord0.x, 0, 1);
This is what leads me to believe it is a sampler state issue.
I have enabled the functionality provided with the GL_AMD_debug_output extension to try to get some sort of clarification on the error. This is the information that the debug callback proc provides:
ID: 1000
Category: API
Severity: HIGH
Message: glDrawElements has generated an error (GL_INVALID_OPERATION)
(as a side note, not the most helpful error message...)
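For anyone else digging through these callbacks: the raw error enums can at least be translated to names with a tiny helper. A sketch, using the standard enum values from the GL headers so it stands alone without a GL context:

```cpp
#include <string>

// Map the standard glGetError codes (values from the GL headers) to names.
std::string glErrorName(unsigned int err) {
    switch (err) {
        case 0:      return "GL_NO_ERROR";
        case 0x0500: return "GL_INVALID_ENUM";
        case 0x0501: return "GL_INVALID_VALUE";
        case 0x0502: return "GL_INVALID_OPERATION";
        case 0x0505: return "GL_OUT_OF_MEMORY";
        case 0x0506: return "GL_INVALID_FRAMEBUFFER_OPERATION";
        default:     return "unknown GL error";
    }
}
```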
Additionally, I have ensured that I am meeting all of the requirements of the spec associated with that error. Namely, I'm not using geometry shaders and none of my buffer objects are currently mapped.
I would like to create a simple test program to provide to you, but it could take a bit of work to do so. I'm hoping that the ID provided by that callback would mean something to the driver developers and I could perhaps get a bit more context as to what the error is.
Thank you for any insight you might be able to provide.
We will take a look at it. It would be better if you could create a simple program and send it to me at frank.li@amd.com.
Thanks
OK, as I'm boiling down my project into a test example for you, I think I've found the source of the issue. It doesn't quite make sense to me, but maybe it will mean something to you.
When we compile our shaders we generally break our sources into 3 parts:
1. The version identifier
2. The compile time defines for permutations.
3. The actual source.
Instead of packing this into a single string and providing it to glShaderSource, we keep it as three parts, i.e.:
char const * skVersion = "#version 150\n";
char const * skDefine = "#define USE_DIFFUSE\n";
GLchar const * sourceList[3];
sourceList[0] = skVersion;
sourceList[1] = skDefine;
sourceList[2] = shaderSource;
glShaderSource( shaderId, 3, sourceList, NULL );
I've found that if I instead pack the version and defines into a single string (e.g. concatenating with std::string), i.e.:
std::string combined = std::string(skVersion) + skDefine + shaderSource;
sourceList[0] = combined.c_str();
glShaderSource( shaderId, 1, sourceList, NULL );
that the error no longer occurs in glDrawElements and the texture renders as I would expect.
I'll continue to boil this down into a test project, but I thought I should give you a heads up on that.
Frank, I sent you a test program to the email you supplied. I look forward to hearing back from you!
It's a driver bug with multi-string shader sources. Thanks for your feedback. We will fix it soon.
I am definitely not an expert, but could it have anything to do with FIFO (first-in, first-out) methods that might involve a memory conflict? Also, I have noticed that Microsoft's C++ compiler is quite different from ANSI C++; this is just a suggestion to jog some ideas.
good luck!