
adik
Journeyman III

GLSL compute shader incompatibilities

Hello,

In my project I use GLSL compute shaders (version 430), and I have two problems when running the application on an AMD Radeon R9 270X graphics card. I tried Catalyst 14.12 and 15.3 drivers; both versions suffer from both problems.

First, I store my data in SSBOs (Shader Storage Buffer Objects), and before the computation I clear some of them using the glClearBufferData function. One of the buffers stores a hashing structure where each entry takes 3 uints (12 bytes), and I use the std430 layout when writing and reading the data, so that the entries are tightly packed. The problem occurs when I clear this buffer: I supply glClearBufferData with arguments for three-channel data (internalformat = GL_RGB32UI, format = GL_RGB_INTEGER, type = GL_UNSIGNED_INT), together with an array of three uints as the clear value, and I get a GL_INVALID_VALUE error.

Second, in the hashing compute shader I use the atomicCompSwap function, which on my AMD card does not behave as the OpenGL documentation describes. The problem is the order of the second and third parameters (compare and data): the function works correctly only when I swap those two parameters, which conflicts with the OpenGL documentation and also with the ARB specification for SSBOs.

I have workarounds for both of these problems, but neither problem occurs when I run the application on an NVIDIA GTX 580 card.

Thanks for your interest,

Adam

8 Replies
jtrudeau
Staff

Re: GLSL compute shader incompatibilities

Thanks for the report. I have passed that on to the engineers. If they have questions or suggestions, they'll jump into the thread.

guo
Staff

Re: GLSL compute shader incompatibilities

About the first issue: have you tried without std430, or with std140 instead?

If possible, please supply your GLSL shader source.

adik
Journeyman III

Re: GLSL compute shader incompatibilities

As I said, I have workarounds. I can keep the std430 layout in the shaders if I clear the buffer with the same default value for all three (RGB) channels: glClearBufferData works without any problem when supplied with GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT. Originally, though, I wanted to clear the buffer with a different default value for each channel.

The GLSL code for the second issue follows. I also tried the default value 0 instead of 0xffffffff, but the result was the same.


struct entry {
    uint key;
    uint k;
    uint index;
};

layout(std430) buffer Hashes {
    entry hashes[];
};

uint hash(uint value, uint iteration, uint capacity) {
    value = ((value ^ 65521) + 2039) % 65537;
    return (value + iteration * iteration) % capacity; // quadratic probing
}

void main() {
    // writeHash is used
}

void writeHash(uint i, uint j, uint k, uint elemIdx) {
    uint key = i * maxCount + j; // compute 1D index of 2D data
    bool written = false;
    uint iteration = 0;
    uint maxIterations = min(maxHashIterations, 3 * maxNumTotalElements);
    while (!written && iteration < maxIterations) {
        uint index = hash(key, iteration, 3 * maxNumTotalElements);
        //uint oldKey = atomicCompSwap(hashes[index].key, INVALID_KEY, key); // NVIDIA follows OpenGL spec
        uint oldKey = atomicCompSwap(hashes[index].key, key, INVALID_KEY); // ATI atomicCompSwap BUG
        if (oldKey == INVALID_KEY) { // INVALID_KEY == 0xffffffff (default value)
            hashes[index].k = k;
            hashes[index].index = elemIdx;
            written = true;
        }
        iteration++;
    }
    if (!written) {
        atomicCounterIncrement(hashErrorCount); // count not hashed elements
    }
}



jtrudeau
Staff

Re: GLSL compute shader incompatibilities

FYI: The engineering team has opened up a defect report internally. In case we ever need to refer back to this, it is #416936.

guo
Staff

Re: GLSL compute shader incompatibilities

The "atomicCompSwap" issue has been confirmed in the driver: the second and third parameters are swapped. A fix should be released in the coming days.

I wrote the following code to test glClearBufferData. The buffer is cleared to 0, with no GL_INVALID_VALUE error:

    glGenBuffers(8, bo);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, bo[0]);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, bo[0]);
    glBufferData(GL_SHADER_STORAGE_BUFFER, 32 * 16, &data[0], GL_DYNAMIC_COPY);
    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 32 * 16, &data[0]);
    CHECKGL("");
    glClearBufferSubData(GL_SHADER_STORAGE_BUFFER, GL_RGB32UI, 0, 0x100, GL_RGB_INTEGER, GL_UNSIGNED_INT, 0);
    CHECKGL("");
    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 32 * 16, &data[0]);
    CHECKGL("");

If you still have a problem, please attach the relevant piece of your application as well.

adik
Journeyman III

Re: GLSL compute shader incompatibilities

Hi, thanks for the code. It worked for me too, so I experimented a little and found that the size of the buffer matters. When the buffer size is a multiple of 16, everything works. But in my case I store an array whose elements take 12 bytes each, so the total buffer size is a multiple of 12; I tried sizes 36 and 48.

guo
Staff

Re: GLSL compute shader incompatibilities

I tried with that size (such as 12) as well, and everything works for me.

Please check carefully whether the target, format, or something else is wrong in your application; it should not be a "size" problem.

The "atomicCompSwap" issue will be fixed in the next release.

Enjoy your journey with AMD OpenGL.

adik
Journeyman III

Re: GLSL compute shader incompatibilities

I tried once again and found that it is more an issue of the size specified when clearing the buffer than of the size used when creating it. Here is your code, modified so that it fails for me:


unsigned int bo[8];
unsigned char data[32 * 16];

glGenBuffers(8, bo);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, bo[0]);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bo[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, 32 * 16, &data[0], GL_DYNAMIC_COPY);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 12, &data[0]);
checkGLError("GetBufferSubData");
glClearBufferSubData(GL_SHADER_STORAGE_BUFFER, GL_RGB32UI, 0, 12, GL_RGB_INTEGER, GL_UNSIGNED_INT, 0); // Works when using size 16 instead of 12, but 3 * 4 = 12
checkGLError("ClearBufferSubData");
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 12, &data[0]);
checkGLError("GetBufferSubData");


The code reports a GL_INVALID_VALUE error after the glClearBufferSubData call.

Please check my code again. If needed, I can send you the whole project for MSVC 2012 (using freeglut and GLEW); it is just a simple test suite for this case.

Thank you in advance!
