
danjtg
Journeyman III

AMD buffer binding bug?

Hi.

I've been working on a project and I am stuck with some strange behavior.

I have some shaders that generate a fragment list and store it in an SSBO:


struct FragmentData
{
    uint position;
    uint color;
};

layout(std430, binding = 1) buffer Fragments
{
    FragmentData fragmentList[];
};


With this fragment list, I want to build an octree, which is stored in another SSBO. I build it level by level, so for now I have two different programs:

  • the first receives the fragment list and tags the nodes for subdivision:

layout(std430, binding = 1) buffer Fragments
{
    FragmentData fragmentList[];
};

layout(std430, binding = 2) buffer Octree
{
    uint octree[];
};


  • the second receives the octree, finds the nodes tagged for subdivision, and writes the addresses of their child nodes. The octree buffer is declared exactly as above.

On the OpenGL side of the app, I create the shaders, set up their bindings, and so on:


glGenBuffers(1, &fragmentList);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, fragmentList);
glBufferData(GL_SHADER_STORAGE_BUFFER, SIZE * FRAGMENT_SIZE, NULL, GL_DYNAMIC_DRAW);
// GL_R32UI is an integer internal format, so the clear format must be
// GL_RED_INTEGER (plain GL_RED raises GL_INVALID_OPERATION here):
glClearBufferData(GL_SHADER_STORAGE_BUFFER, GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT, &zero);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, fragmentList);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);


In the render function, I generate the fragment list without problems, but then when creating the octree:


for (int i = 0; i < depth; i++)
{
    // Tag octree nodes that need subdivision
    glUseProgram(octreeTag.getProgramIndex());
    glDrawArrays(GL_POINTS, 0, numFragments);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

    // Subdivide the tagged nodes
    glUseProgram(octreeSubdivide.getProgramIndex());
    glDrawArrays(GL_POINTS, 0, (numNodes * 8) + 1);
    glMemoryBarrier(GL_ATOMIC_COUNTER_BARRIER_BIT | GL_SHADER_STORAGE_BARRIER_BIT);
}



The problem is that nothing gets written to the octree. If I add a glBindBufferBase call before the tag pass, the octree gets tagged but not subdivided. If I add a glBindBufferBase before the subdivide pass as well, the octree gets tagged, but it only subdivides when that binding index is different from 2 (i.e., different from the first pass, even though the second shader also declares binding 2), and even then only correctly in the first iteration. After that, attempts to read the fragment list actually return data from the octree.

I think of the binding indices as "ports" that buffers are attached to so that shaders can find them. But in this case it works partially when the indices differ between the OpenGL side and the GLSL side, and not at all when they match.
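For reference, here is the correspondence I expect to hold (a minimal sketch; the buffer variable and size are illustrative):

// GLSL side declares: layout(std430, binding = 2) buffer Octree { uint octree[]; };
// C++ side attaches a buffer object to indexed "port" 2 of the SSBO target:
GLuint octreeBuffer;   // illustrative name
glGenBuffers(1, &octreeBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, octreeBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, octreeSizeBytes, NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, octreeBuffer);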

Is this a bug or am I doing something wrong?

danjtg
Journeyman III

Just a little update on the tests I have been doing.

So I tested this on an Nvidia GPU and it had the same problem. With some help, I found it was because I was missing a VAO, even though I am using attributeless rendering and thought VAOs only mattered for vertex attribute state.
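For anyone hitting the same thing, the fix is just to create and bind an empty VAO once at init (sketch; the variable name is mine):

GLuint emptyVAO;
glGenVertexArrays(1, &emptyVAO);
glBindVertexArray(emptyVAO);   // the core profile requires a bound VAO for any draw call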

After adding that, the code worked without problems on the Nvidia card (it even fixed some structure padding that shouldn't have been there, since I'm using std430), but the problem persists on the AMD card. I used glGetIntegeri_v with GL_SHADER_STORAGE_BUFFER_BINDING and confirmed that the bindings are correct.
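In case it helps anyone, the check looks roughly like this (index 2 being the octree binding from my code above):

GLint boundBuffer = 0;
glGetIntegeri_v(GL_SHADER_STORAGE_BUFFER_BINDING, 2, &boundBuffer);
printf("SSBO binding point 2 -> buffer object %d\n", boundBuffer);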

I know Nvidia drivers tend to be more permissive while AMD follows the spec strictly, so any suggestions on what I might still be missing to make this work?


I've run into incorrect SSBO behaviour on the AMD platform too.

But I'm not sure whether it's a binding problem or a bug in reading from an array of structures in the shader.

stefthedrummer
Journeyman III

Same here.

It must be a bug concerning the shader storage buffers.

My shader storage buffer bindings stop working whenever I render something with a fragment shader before the actual dispatchCompute. When I remove the draw call, or disable the fragment shader (leaving only the vertex shader), it works.

This has cost me three days so far to figure out. It's a shame...

Here is my simplified code to give you the picture:


InterfaceBlockBuffer sourceBuffer = new InterfaceBlockBuffer(resourcePool, "SourceBuffer", BUFFER_USAGE.DYNAMIC_DRAW);
InterfaceBlockBuffer destBuffer = new InterfaceBlockBuffer(resourcePool, "DestBuffer", BUFFER_USAGE.DYNAMIC_DRAW);

sourceBuffer.shaderStorageBlockBinding(Bindings.BUF_SOURCE);
destBuffer.shaderStorageBlockBinding(Bindings.BUF_DEST);

sourceBuffer.bufferSubData(mySourceData);

// ***** If this is commented out - it works *****
Joogl.setProgram(renderProgram);
Joogl.setVertexArray(someVertexArray);
Joogl.draw(DRAW_MODE.TRIANGLE_FAN, 0, 4);
// ***********************************************

Joogl.setProgram(copyProgram); // Copy from sourceBuffer to destBuffer
Joogl.compute(MAX_FACES, 1, 1);

destBuffer.getBufferData(myDestData);
// print out myDestData ... and so on
// myDestData is all zero when the render call is being made
I wrote some tests. When the draw call is inserted, you cannot read from OR write to an SSBO anymore.
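In raw OpenGL terms (stripping out my Joogl wrapper; names are illustrative), the repro boils down to roughly this:

glBindBufferBase(GL_SHADER_STORAGE_BUFFER, BUF_SOURCE, sourceBufferId);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, BUF_DEST, destBufferId);

// ***** If this draw is commented out - it works *****
glUseProgram(renderProgramId);
glBindVertexArray(someVertexArrayId);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// ****************************************************

glUseProgram(copyProgramId);            // compute shader copies source -> dest
glDispatchCompute(MAX_FACES, 1, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_BUFFER_UPDATE_BARRIER_BIT);

glBindBuffer(GL_SHADER_STORAGE_BUFFER, destBufferId);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, destSizeBytes, myDestData);
// myDestData comes back all zero whenever the draw call above is present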
