
AMD buffer binding bug?

Question asked by danjtg on Mar 4, 2014
Latest reply on May 4, 2014 by stefthedrummer

Hi.

 

I've been working on a project and I am stuck with some strange behavior.

 

I have some shaders that generate a fragment list and store it inside a SSBO:

 

struct FragmentData
{
    GLuint position;
    GLuint color;
};

layout(std430, binding = 1) buffer Fragments
{
    FragmentData fragmentList[];
};

 

With this fragment list, I want to build an octree, which is stored in another SSBO. I build it level by level, so for now I have two different programs:

  • the first receives the fragment list and tags the nodes that need subdivision:
layout(std430, binding = 1) buffer Fragments
{
    FragmentData fragmentList[];
};

layout(std430, binding = 2) buffer Octree
{
    uint octree[];
};
  • the second receives the octree, finds the nodes tagged for subdivision, and writes the addresses of their child nodes. The octree buffer is declared exactly as above.
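
For context, the tag pass does roughly the following (this is a simplified sketch, not my real shader; octreeTraverse and FLAG_SUBDIVIDE are placeholder names):

```glsl
// Vertex shader, invoked once per entry in the fragment list (GL_POINTS draw).
layout(std430, binding = 1) buffer Fragments { FragmentData fragmentList[]; };
layout(std430, binding = 2) buffer Octree    { uint octree[]; };

void main()
{
    // Walk the octree levels built so far down to the leaf containing
    // this fragment, then flag that leaf so the subdivide pass will
    // allocate its eight children.
    uint node = octreeTraverse(fragmentList[gl_VertexID].position); // placeholder helper
    octree[node] |= FLAG_SUBDIVIDE;                                 // placeholder flag bit
}
```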

 

On the OpenGL side of the app, I create the shaders, set up their bindings, etc. The fragment list buffer is created like this:

glGenBuffers(1, &fragmentList);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, fragmentList);
glBufferData(GL_SHADER_STORAGE_BUFFER, SIZE * FRAGMENT_SIZE, NULL, GL_DYNAMIC_DRAW);
// Note: an integer internal format (GL_R32UI) requires an *_INTEGER pixel
// format here; plain GL_RED makes glClearBufferData raise GL_INVALID_OPERATION.
glClearBufferData(GL_SHADER_STORAGE_BUFFER, GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT, &zero);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, fragmentList);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
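
The octree buffer is set up the same way, bound to index 2 to match the binding = 2 declaration in the shaders (shown here for completeness; octreeBuffer and OCTREE_SIZE are placeholder names):

```c
// Octree SSBO, analogous to the fragment list above.
glGenBuffers(1, &octreeBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, octreeBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, OCTREE_SIZE * sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);
glClearBufferData(GL_SHADER_STORAGE_BUFFER, GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT, &zero);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, octreeBuffer);  // matches binding = 2 in GLSL
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
```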

 

In the render function, I generate the fragment list without problems, but then when creating the octree:

for (int i = 0; i < depth; i++)
{
     // Tag Octree
     glUseProgram(octreeTag.getProgramIndex());
     glDrawArrays(GL_POINTS, 0, numFragments);
     glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

     // Subdivide Octree
     glUseProgram(octreeSubdivide.getProgramIndex());
     glDrawArrays(GL_POINTS, 0, (numNodes*8)+1);
     glMemoryBarrier(GL_ATOMIC_COUNTER_BARRIER_BIT | GL_SHADER_STORAGE_BARRIER_BIT);
}

 

The problem is that nothing gets written into the octree. What I have observed:

  • If I add a glBindBufferBase before the tag pass, the octree gets tagged, but not subdivided.
  • If I also add a glBindBufferBase before the subdivide pass, the octree gets tagged, but it is only subdivided when the index passed to glBindBufferBase is different from 2 — i.e. different from the binding declared in the shader, even though the subdivide shader also declares binding = 2 — and even then it only works in the first iteration.
  • After that, attempts to read the fragment list actually return data from the octree buffer.
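
For reference, the workaround variant with re-binding every iteration looks roughly like this (octreeBuffer is a placeholder name; in theory these glBindBufferBase calls should be redundant, since indexed buffer bindings are context state, not program state):

```c
for (int i = 0; i < depth; i++)
{
    // Re-bind both SSBOs before each pass.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, fragmentList);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, octreeBuffer);

    // Tag Octree
    glUseProgram(octreeTag.getProgramIndex());
    glDrawArrays(GL_POINTS, 0, numFragments);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

    // Subdivide Octree
    glUseProgram(octreeSubdivide.getProgramIndex());
    glDrawArrays(GL_POINTS, 0, (numNodes * 8) + 1);
    glMemoryBarrier(GL_ATOMIC_COUNTER_BARRIER_BIT | GL_SHADER_STORAGE_BARRIER_BIT);
}
```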

I think of the binding indexes as "ports" that link the buffers to the shaders, so that each shader accesses the right buffer. But in this case it works partially when the indexes differ between OpenGL and GLSL, and does not work at all when they match.

Is this a bug or am I doing something wrong?
