13 Replies Latest reply on Aug 9, 2013 3:06 PM by maizensh

    OpenGL Compute Shader 13.4 Driver Crash/Restart Win7 64bit

    yours3lf

      Hi there,

       

      I've been trying to get OpenGL compute shaders to work using the new driver.

      I got mixed results: an application that writes to a Texture Buffer Object runs fine, but an application that writes to a plain Texture causes a driver crash/restart. I found nothing in the OpenGL specification forbidding this, so it should probably work.

       

      here's the app:
      https://docs.google.com/file/d/0B33Sh832pOdOaWFlVS00N040bFE/edit?usp=sharing

       

      Putting a glFinish() after calling glDispatchCompute solved the driver crash, but I still don't get anything on screen. I can render the texture fine when not using compute shaders.

      I suspect this might be a synchronization issue in the driver: the compute shader writes to the texture while the next shader reads from it at the same time.

       

      Please take a look at this issue.

       

      Best regards,

      Yours3lf

        • Re: OpenGL Compute Shader 13.4 Driver Crash/Restart Win7 64bit
          sc4v

          Same issue here

          Using a HD 5850 with 13.4

          • Re: OpenGL Compute Shader 13.4 Driver Crash/Restart Win7 64bit
            dutta

            You never bind anything to the texture in the compute shader: the uniform 'texture0' is not attached to any texture object. First, as with regular textures, you get the uniform location; then you must call glBindImageTexture on the texture. Example code:

            glUniform1i(glGetUniformLocation(program, "texture0"), 0);
            glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
            

            glUniform1i binds the uniform to an image unit (here 0), and glBindImageTexture then binds the texture to that unit. With ordinary textures you would call glActiveTexture() and then glBindTexture(), but images are bound differently, so glBindImageTexture() is used instead. This should solve your crash.
            I know this because I ran into the same problem with the same example you are working on. Also, instead of glFinish(), I would expect glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); to do the same thing in your case, since you are writing directly to an image and nothing else. glMemoryBarrier() ensures that execution for a specific feature set has finished before continuing, and since you are using shader images, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT should suffice.

             

            Hope this solves the problem.

              • Re: OpenGL Compute Shader 13.4 Driver Crash/Restart Win7 64bit
                yours3lf

                hey there,

                 

                thank you for the reply!

                Shortly after writing this question I found the glBindImageTexture() function, but it still doesn't work with it, nor with the corrections you suggested.

                I also found the barrier function, but it doesn't solve anything.

                 

                my hardware specs:
                AMD A8-4500m APU

                 

                Can you please take a look at it again? Here's the updated project:

                opengl_compute_shader.7z - Google Drive

                  • Re: Re: OpenGL Compute Shader 13.4 Driver Crash/Restart Win7 64bit
                    dutta

                    Weird. You can also try setting local_size_z to 1 in the shader. I'm currently developing a middleware compiler that accepts another language and generates OpenGL code from it, and I managed to get this to work; however, I also got driver crashes whenever I called glDispatchCompute() with dimensions larger than those defined in the shader. I don't know what the specification says about a local size that is left undefined, so my compiler simply sets local_size_x/y/z to 1 whenever they are not defined in the middleware code. So just try changing:

                     

                    layout(local_size_x = 16, local_size_y = 16) in; //local workgroup size
                    
                    
                    

                    To:

                    layout(local_size_x = 16, local_size_y = 16, local_size_z = 1) in; //local workgroup size
                    
                    
                    

                     

                    You might also want to bind the texture uniform in your geometry rendering shader. I can see you are doing:

                    glActiveTexture(GL_TEXTURE0);
                    glBindTexture(GL_TEXTURE_2D, the_texture);
                    
                    
                    

                     

                    But never:

                    glUniform1i(glGetUniformLocation(debug_shader, "texture0"), 0);
                    
                    
                    

                     

                    You should do something like:

                    glUseProgram(debug_shader);
                    
                    glUniform1i(glGetUniformLocation(debug_shader, "texture0"), 0);
                    glActiveTexture(GL_TEXTURE0);
                    glBindTexture(GL_TEXTURE_2D, the_texture);
                    
                    
                    
                      • Re: Re: Re: OpenGL Compute Shader 13.4 Driver Crash/Restart Win7 64bit
                        yours3lf

                        hey there,

                         

                        I set the local size and the uniform location, but still nothing. And it still crashes :S
                        By the way, the reason I'm not calling glUniform1i(glGetUniformLocation(...), ...) is that the locations were set in the shaders using layout qualifiers:

                        layout(binding=loc) uniform sampler2D/image2D texture0;

                        which lets me write just:

                        glActiveTexture(GL_TEXTURE0 + loc);

                        with no need to pass the location via a uniform.

                         

                        so essentially, loc should be the same in both places:

                         

                        currently I'm doing this:

                        //fill the texture with the compute shader output
                            glUseProgram(compute_shader);
                        
                            glUniform1f(1, float(frames) * 0.01f);
                        
                            glUniform1i(glGetUniformLocation(compute_shader, "texture0"), 0);  
                            glBindImageTexture(glGetUniformLocation(compute_shader, "texture0"), the_texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8); 
                            //glBindImageTexture(0, the_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
                        
                            glDispatchCompute(screen.x / 16, screen.y / 16, 1);
                        
                            glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
                            glFinish(); //still needed :S
                        
                            get_opengl_error();
                        
                            //display the texture on screen
                            glUseProgram(debug_shader);
                        
                            mvm.push_matrix(cam);
                            glUniformMatrix4fv(0, 1, false, &ppl.get_model_view_projection_matrix(cam)[0][0]);
                            mvm.pop_matrix();
                        
                            glUniform1i(glGetUniformLocation(debug_shader, "texture0"), 0);  
                            glActiveTexture(GL_TEXTURE0);
                            glBindTexture(GL_TEXTURE_2D, the_texture);
                        
                            glBindVertexArray(quad);
                            glDrawElements( GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0 );