OpenGL & Vulkan

Bug when transferring integer uniforms?


Here is some simple code (implementing a stipple pattern) that works "most of the time" but fails with certain data when running on the AMD graphics driver.


The fragment shader (it reads gl_FragCoord and writes out_Color):


#version 330

uniform vec3  in_Color;
uniform float in_Alpha;
uniform uint  in_Stipple[32];
uniform bool  in_StippleEn = false;

out vec4 out_Color;

void main(void)
{
   bool dropThePixel = false;
   if (in_StippleEn)
   {
      uvec2 ufCoord = uvec2(gl_FragCoord.x, gl_FragCoord.y);
      uint index = 31u - (ufCoord.y % 32u);
      uint mask  = uint(0x80000000) >> (ufCoord.x % 32u);
      dropThePixel = !bool(in_Stipple[index] & mask);
   }
   if (dropThePixel)
      discard;
   out_Color = vec4(in_Color, in_Alpha);
}



The code that updates the in_Stipple uniform:



   const byte* tellStipple;

   // ... tellStipple points to 128 unsigned char values

   GLuint shdrStipple[32];
   for (unsigned i = 0; i < 32; i++)
      shdrStipple[i] = ((GLuint)(tellStipple[4*i + 0]) << 8*3)
                     | ((GLuint)(tellStipple[4*i + 1]) << 8*2)
                     | ((GLuint)(tellStipple[4*i + 2]) << 8*1)
                     | ((GLuint)(tellStipple[4*i + 3]) << 8*0);
   glUniform1uiv(glslUniVarLoc[glslu_in_Stipple], 32, shdrStipple);
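For clarity, the packing loop above can be pulled out into a plain C helper (the name pack_be32 is my own, not from the original code), which makes it easy to confirm the big-endian byte order independently of any GL state:

```c
#include <stdint.h>

/* Pack four bytes (p[0] is the most significant byte) into one
   32-bit stipple row - the same operation as the loop above. */
uint32_t pack_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24)
         | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] <<  8)
         | ((uint32_t)p[3] <<  0);
}
```

With this helper the upload loop reduces to shdrStipple[i] = pack_be32(&tellStipple[4*i]);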



OK - I should add that it all compiles without trouble... and works in most cases.

The trouble appears with a quite particular combination of bits in a particular location of the stipple array: shdrStipple[0], bits 31 and 30. So:

- if bits 31 and 30 of shdrStipple[0] are not both set - no problem: it all works with any values in the rest of the array

- if bits 31 and 30 of shdrStipple[0] are both set and the rest of the array is 0 - no problem either

The trouble comes when those two bits are set simultaneously and the rest of the array is non-zero. In that case the rest of the array is compromised and, of course, the picture on the screen is quite different from the expected one.

Of course it took me some nights to find this out. The simplest working workaround is to expand the array to 33 uints and keep shdrStipple[0] at 0. With the corresponding correction in the shader, it works fine.
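A sketch of that 33-element workaround on the host side (the helper name build_padded_stipple and the +1 index shift are my assumptions about what "the corresponding correction in the shader" means):

```c
#include <stdint.h>
#include <string.h>

/* Build a 33-element array whose element 0 stays zero; the real
   pattern occupies elements 1..32. The shader would then declare
   uniform uint in_Stipple[33] and index in_Stipple[index + 1u]. */
void build_padded_stipple(const uint32_t pattern[32], uint32_t out[33])
{
    out[0] = 0u;                                      /* dummy slot, always 0 */
    memcpy(&out[1], pattern, 32 * sizeof(uint32_t));  /* shifted pattern */
    /* upload with: glUniform1uiv(loc, 33, out); */
}
```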

The other workaround (very ugly!) is to convert the pattern to floats on both sides (the array has to be doubled in this case, because a float cannot hold a 32-bit uint without precision loss).

Neither workaround is posted here.

The question is: what's wrong with the code? The impression is that the data is getting compromised somewhere in the pipe between the user software on the host and the shader. This is not my first problem with the way AMD handles integers, so some comments would be highly appreciated.



P.S. Shall I add that the same code works fine on other platforms?

1 Reply

Re: Bug when transferring integer uniforms?


Thanks for the in-depth description of the problem. As you said, it all looks right, and the actual values of the uniforms shouldn't affect the correct functioning of the program. However, without being able to actually debug things, any answer we could give would be a total guess. Here are a few things you could try that might give us more information:

  • Explicitly set the value of each uniform one at a time with a call to glUniform1ui.
  • Put the data in a uniform block and set it explicitly by mapping the UBO and writing to it.
  • Hard code the array into a constant in the shader to verify that it would work that way.
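For the third suggestion, a minimal GLSL sketch would look like the following (the pattern values are placeholders; in particular, 0xC0000000u puts the problematic bits 31 and 30 into element 0):

```glsl
// Hard-coded test pattern replacing the uniform array, to check
// whether the shader logic itself mishandles bits 31/30.
const uint test_Stipple[32] = uint[32](
    0xC0000000u, 0xFFFFFFFFu, 0u, 0u, 0u, 0u, 0u, 0u,
    0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u,
    0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u,
    0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u);
```

If the picture is correct with the constant array but wrong with the uniform, that would point at the uniform upload path rather than the shader arithmetic.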

If you want to try this yourself, please let us know the results of your experiment. If you can share your application (just a binary would be fine), we'll try to debug it here.