
Shrinker
Journeyman III

Uniform buffers and the std140 layout

Something's wrong after loading a binary program

Hey there. I have a problem that I can only describe here and then answer questions about, because there is too much proprietary code involved to "quickly" recreate the issue.

I have a pretty complex shader in which I used uniforms for everything, including passing the modelview and the projection matrices. The shader is loaded and compiled by my program, and then the binary is saved to disk as a cache for later, using the standardized program binary mechanism from OpenGL 4, for which ATI driver support was only added recently.
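Roughly, the caching path looks like this (a simplified sketch, not the actual code; "program" is the linked shader object, and file I/O, error handling and cache invalidation are omitted):

    // ask the driver for a retrievable binary before linking
    glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
    glLinkProgram(program);

    // save: query the binary blob and write it to the cache file
    GLint binaryLength = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &binaryLength);
    void *binary = malloc(binaryLength);
    GLenum binaryFormat = 0;
    glGetProgramBinary(program, binaryLength, NULL, &binaryFormat, binary);
    // ... write binaryFormat and the blob to the cache file ...

    // load on the next run: feed the blob back instead of compiling the source
    GLuint cached = glCreateProgram();
    glProgramBinary(cached, binaryFormat, binary, binaryLength);
    GLint linked = 0;
    glGetProgramiv(cached, GL_LINK_STATUS, &linked);   // fall back to the source if this fails
    free(binary);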

Now I have changed it so that the two matrices are no longer passed as individual uniforms, but through a uniform buffer. On my Nvidia system this works fine, and it also works on the ATI system at first, but when the application is restarted (so that it uses the binary shader cache file saved with the standard OpenGL mechanism), the ATI system behaves strangely. The Nvidia system still displays everything as usual, but the ATI system seems to zero out the data somehow. The uniform block has the std140 layout qualifier set, which, according to the OpenGL specification, defines a fixed data layout you can depend on, with precise rules (all in the spec).
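For illustration, the setup looks roughly like this (a simplified sketch; "Matrices", the member names and "program" are placeholders, not the real shader or application code):

    // GLSL side (std140 fixes the layout; names are placeholders for this sketch):
    //
    //     layout(std140) uniform Matrices {
    //         mat4 modelview;    // std140 offset 0
    //         mat4 projection;   // std140 offset 64 (a mat4 takes 4 vec4 columns = 64 bytes)
    //     };

    // C side: upload both matrices at the offsets std140 guarantees
    GLfloat block[32];                          // 2 * mat4 = 128 bytes
    // ... copy the modelview matrix into block[0..15] and the projection into block[16..31] ...

    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(block), block, GL_DYNAMIC_DRAW);

    GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
    glUniformBlockBinding(program, blockIndex, 0);    // binding point 0
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);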

I have a hunch that this layout is disregarded when the shader is loaded from the binary cache. Could anyone with access to the driver codebase please look into that? 😕

Note: All location values are confirmed valid; no invalid uniform locations are returned by any of the calls along the way.

 

Regards,

Shrinker

 

frali
Staff

We will verify the problem first. Does it happen with std140 only, and not with packed?


I can only confirm this with std140 right now, as testing it with packed is too much hassle for me at this point. One other thing I've noticed: before I updated my driver to the most recent version, nothing was displayed in either case (with or without the binary cache). If there has been a known issue regarding something like that before, the two issues are probably connected.

In order to work around this problem and the other issue I reported half a year ago (nothing being drawn when no attribute is bound to location 0, and gl_VertexID misbehaving if an attribute with an alphanumeric name came before it), I've now changed my workflow: besides the normal executable, I also maintain one with workarounds for these bugs =/

Here is a test application for a grid component I have developed for my level editor (no sources): http://shrinker.beyond-veils.de/temp11f/20110210sample.zip

The normal executable uses grid.shader and creates grid.shader.cache if that's supported by the system. grid.shader makes use of gl_VertexID.

The _AMD executable does not use the shader cache because of the problem outlined in this thread, and uses the int attribute _vertexID in grid_AMD.shader.
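To illustrate the workaround (a sketch only; the shader file names and _vertexID are from above, while "programAmd" and "vertexIdBuffer" are made up):

    // grid.shader simply reads the built-in gl_VertexID;
    // grid_AMD.shader instead declares "in int _vertexID;" and reads that.

    // C side of the workaround: feed the vertex index as an integer attribute
    GLint loc = glGetAttribLocation(programAmd, "_vertexID");
    glBindBuffer(GL_ARRAY_BUFFER, vertexIdBuffer);            // a buffer of GLint 0, 1, 2, ...
    glEnableVertexAttribArray((GLuint)loc);
    glVertexAttribIPointer((GLuint)loc, 1, GL_INT, 0, (void *)0);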

Press Esc to toggle fly mode, then use the mouse to look around and standard gaming controls to move. The mouse wheel and +/- adjust the flying speed. Various keys change various attributes in the shader and the cursor snap (undocumented, just for testing); changing the origin does not work right now, but that's only because it's still work in progress. The most important keys are 6 (changes the plane setup; the "other mode" displays the planes you look at) and 7 (changes the grid lines). Other notable keys are ERTZU for toggling the individual grid snap flags, and O shows a skewed grid. When you move away from the world origin, the grid follows.

Screenshot:

 

Regards,

Shrinker

 


Regarding the program binary issue:
I used a simple shader to check whether a uniform block with std140 works with a program binary, and it seems to work fine. Below is some sample code that sets up the uniform buffer.

    // get the uniform block index and size
    GLuint ubindex = glGetUniformBlockIndex(newProgram, "color");
    GLint bsize;
    glGetActiveUniformBlockiv(newProgram, ubindex, GL_UNIFORM_BLOCK_DATA_SIZE, &bsize);

    // create the uniform buffer according to the uniform block size
    GLuint buffer = 0;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_UNIFORM_BUFFER, buffer);
    glBufferData(GL_UNIFORM_BUFFER, bsize, NULL, GL_STATIC_DRAW);
    glBindBuffer(GL_UNIFORM_BUFFER, 0);

    // get the uniform block members' offsets and sizes
    GLuint redindex, greenindex;
    const char *redname = "red", *greenname = "green";
    glGetUniformIndices(newProgram, 1, &redname, &redindex);
    glGetUniformIndices(newProgram, 1, &greenname, &greenindex);
    GLint redoffset, greenoffset, redsize, greensize;
    glGetActiveUniformsiv(newProgram, 1, &redindex, GL_UNIFORM_OFFSET, &redoffset);
    glGetActiveUniformsiv(newProgram, 1, &greenindex, GL_UNIFORM_OFFSET, &greenoffset);
    glGetActiveUniformsiv(newProgram, 1, &redindex, GL_UNIFORM_SIZE, &redsize);
    glGetActiveUniformsiv(newProgram, 1, &greenindex, GL_UNIFORM_SIZE, &greensize);

    // write the two uniform block members
    const float red[] = {1.0, 0.0, 0.0, 1.0};
    const float green[] = {0.0, 0.0, 0.0, 0.0};
    glBindBuffer(GL_UNIFORM_BUFFER, buffer);
    // each member is a vec4, so its byte size is GL_UNIFORM_SIZE (array length) * 16
    glBufferSubData(GL_UNIFORM_BUFFER, redoffset, redsize * 16, red);
    glBufferSubData(GL_UNIFORM_BUFFER, greenoffset, greensize * 16, green);

    // bind the uniform buffer: assign the uniform block to the first binding point
    glUseProgram(newProgram);
    const GLuint bindingPoint = 0;
    glUniformBlockBinding(newProgram, ubindex, bindingPoint);
    glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, buffer);
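For reference, the uniform block these queries assume looks like this on the GLSL side (reconstructed from the names used above, not the original test shader):

    // fragment shader counterpart, reconstructed for illustration
    static const char *fsSource =
        "#version 150\n"
        "layout(std140) uniform color {\n"
        "    vec4 red;\n"
        "    vec4 green;\n"
        "};\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = red + green; }\n";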

 

Could you please check whether there is some difference between your code and mine? Thanks.

 

Regarding the vertex attribute 0 issue:
We are still working on the bug in the core profile where nothing is drawn when attribute 0 is not bound. That is the root cause of what you see with your sample code.

As for the gl_VertexID issue, the bug was that the result is wrong when the "first" vertex ID is not zero. That one has actually been fixed; if it's not fixed on your side, please tell me. Thanks.

 


Hey there,

 

My program has no call to glGetUniformIndices or glGetActiveUniformsiv for this particular case, because I interpreted the specification to mean that with std140, even the offsets of the uniforms within the block can be deduced from the shader code alone. Is that not correct?
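A quick sanity check of that interpretation (sketched here with hypothetical member names and a generic "program" handle) would be to compare the assumed offsets against what the driver reports:

    // std140 should place the first mat4 at offset 0 and the second at offset 64
    const char *names[2] = { "modelview", "projection" };    // assumed member names
    GLuint indices[2];
    GLint offsets[2];
    glGetUniformIndices(program, 2, names, indices);
    glGetActiveUniformsiv(program, 2, indices, GL_UNIFORM_OFFSET, offsets);
    // if offsets[0] != 0 or offsets[1] != 64, the driver disagrees with the std140 assumption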

Regarding the other issues: I will check that out as soon as I can, but I can't right now, because I'm about to graduate from university in about a month.

*puts that on his todo list*

 

Regards,

Shrinker


I'm not sure whether your solution is correct or not, but my approach is a safe way to set each uniform block member. If you have problems, you could send the source code to me for investigation. In any case, a uniform block is more complex than a plain uniform.


Your solution is safe, that is right, but if the spec says that the whole memory layout can be deduced from std140 and the shader source code alone, then that should be respected, and it should be possible for me to assume and rely on it. And as I said, it _does_ work when the shader is not loaded from the cache.

Could you check out the spec one last time regarding this?

I can send you the sources for this, but I need a proper contact address or a private message function for that.

 

Regards,

Shrinker


The mail address is frank.li@amd.com.


Mail sent.

larspensjö
Journeyman III

The post is a little old, but I think this is still relevant?

I have a problem with the same symptoms, but it does not depend on saving shaders as binaries. The uniform buffer becomes unbound from the shader now and then, so it looks as if glBindBufferBase disconnects sometimes. When it fails, the data in the UBO accessed by the shader program reads as 0.

I have a call to glDeleteBuffers where I delete buffer 0. When I do this, the binding above is lost. There shouldn't be any connection between these two things, but I can reproduce it reliably. Guarding the glDeleteBuffers call with a conditional so that it is never called with buffer 0 removes the problem.
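For what it's worth, the guard is as simple as it sounds (a sketch; "buffer" stands for whatever name the cleanup code is about to delete):

    // workaround: never pass the name 0 to glDeleteBuffers, even though the spec
    // says deleting buffer 0 should be silently ignored
    if (buffer != 0)
        glDeleteBuffers(1, &buffer);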

I have the problem on Ubuntu with driver version 8.96 and OpenGL version 4.2.11631. It also happens when I test with Windows 7 on the same computer.

Vendor: ATI Technologies Inc.

Renderer: AMD Radeon HD 6570

Version: 4.2.11627 Compatibility Profile/Debug Context

GLSL: 4.20


Hi,

Yes, this post is a bit old, and it sounds like your problem might not be directly related. Unfortunately, Frank Li left AMD a while ago, which is probably why the original thread died. It certainly sounds like you've found a bug: deleting the buffer name zero is perfectly legal and should be silently ignored by OpenGL, but it seems to be triggering the logic that unbinds buffers that are still bound when they're deleted. We'll look into it and get a fix into a future driver.

Thanks,

Graham
