4 Replies Latest reply on Sep 28, 2012 9:06 AM by Meteorhead

    CL-GL interop issue

    Meteorhead

      Hi!

       

      I have an application that used to work, but now it does not. It's a not-entirely-trivial interop application, and the problem is that kernels using interop buffers don't seem to run. I tried to debug the application with CodeXL, but as I mentioned in the VS2012 add-in tools discussion, it wrecked my computer. There is a part of the code that updates the vertices, and there is a very simple kernel that copies from one buffer to the other. However, the kernels perform no update. If I fetch the data by hand, update the vertices on the host, and copy the result back, it displays correctly.

       

      What's even stranger is that the code works perfectly on Linux (both CPU and GPU), while on Windows it works on the CPU but not on the GPU. Here is part of the code:

       

      OpenGL.lock();

      mainWindow->setActive();

      CL_err = auxRuntime->appQueues[0].enqueueAcquireGLObjects(&vertexBuffs, NULL, NULL);

      auxRuntime->checkErr(CL_err, cl::ERR, "clEnqueueAcquireGLObjects(vertexBuffs)");

       

      // The way it should work...

      CL_err = auxRuntime->appQueues[0].enqueueNDRangeKernel(upd_vtx, cl::NullRange, upd_vtx_global, upd_vtx_local, NULL, NULL);

      auxRuntime->checkErr(CL_err, cl::ERR, "appQueues[0].enqueueNDRangeKernel(upd_vtx)");

       

      // And the way it actually works.

      CL_err = auxRuntime->appQueues[0].enqueueReadBuffer(dataBuff, CL_TRUE, 0, data4.size() * sizeof(REAL4), data4.data(), NULL, NULL);

      auxRuntime->checkErr(CL_err, cl::ERR, "clEnqueueReadBuffer(vertexBuffs)");

      // Note: "parallel for" divides the outer loop among the threads;
      // a bare "parallel" region would run the full loop nest on every thread.
      #pragma omp parallel for num_threads(4)

      for(int y = 0 ; y < sysHeight ; ++y)

          for(int x = 0 ; x < sysWidth ; ++x)

          {

              cl_float4 temp;

              temp.s[0]=(float)x;

              temp.s[1]=(float)y;

              temp.s[2]=(float)data4[y*sysWidth+x].s[0];

              temp.s[3]=(float)data4[y*sysWidth+x].s[1];

              mesh[y*sysWidth+x] = temp;

          }

      glBindBuffer(GL_ARRAY_BUFFER, m_vbo); auxRuntime->checkErr(glGetError(), cl::ERR, "glBindBuffer(m_vbo)");

      glBufferSubData(GL_ARRAY_BUFFER, 0, mesh.size() * sizeof(REAL4), mesh.data());

      auxRuntime->checkErr(glGetError(), cl::ERR, "glBufferSubData(m_vbo)");

      glBindBuffer(GL_ARRAY_BUFFER, 0); auxRuntime->checkErr(glGetError(), cl::ERR, "glBindBuffer(0)");

       

      CL_err = auxRuntime->appQueues[0].enqueueReleaseGLObjects(&vertexBuffs, NULL, NULL);

      auxRuntime->checkErr(CL_err, cl::ERR, "clEnqueueReleaseGLObjects(vertexBuffs)");

       

      CL_err = auxRuntime->appQueues[0].finish(); auxRuntime->checkErr(CL_err, cl::ERR, "clFinish()");

      imageDrawn = false;

      mainWindow->setActive(false);

      OpenGL.unlock();

       

      No function reports an error, and there can be no mistake in the init part either, given that it works on the CPU, not to mention on the GPU under Linux. I have no idea why this happens. The application does no black magic; it animates a point cloud that is the result of physics calculations.

       

      Any ideas?

        • Re: CL-GL interop issue
          alariq

          I would say there is not enough info. But I had a similar problem when switching from CPU to GPU: everything worked on the CPU while giving wrong results on the GPU. Then I figured out that the actual buffer creation flags were wrong, and I was also issuing clEnqueueMapBuffer() with CL_MAP_READ while actually writing data to it :-) It worked on the CPU because there the map just returns the host pointer. I would suggest checking the flags. Providing the kernel or working code would also help.

           

            • Re: CL-GL interop issue
              Meteorhead

              Your reply is formatted a little strangely, but it may just be my browser.

               

              I would suspect the flags or some other mistake too, but on Linux it works on the GPU, and the code is identical down to the last bit (apart from the creation of the shared context, where the types naturally differ).

               

              The application uses SFML and SFGUI; prebuilt libraries are also attached (NOTE: the zip files do not contain a root directory for the extracted files). A VS2010 project and a Linux Makefile are also included (no prebuilt Linux binaries for the dependencies). I have no idea what causes the problem.

               

              Once the application starts, press the InitCL button; when "Exiting InitCL" is printed, press InitSim. Once "Exiting InitSim" is printed and no fatal errors were encountered, press StartSim. On Windows the initial mesh is displayed, although it should be animated by the upd_vtx kernel. The simulation data does change, as the commented-out part proves.

               

              EDIT: I forgot to add the prebuilt GSL libs, which are surely required for out-of-the-box compilation.

                • Re: CL-GL interop issue
                  alariq

                  Hi, I've run your program. Same thing here.

                  However, I could not figure out what is wrong there :-(

                  Indeed, it seems there are no errors. However, if I run it under CodeXL and press Start Sim,

                  it gives errors like these:

                   

                  DEBUG: Setting shader variables

                  DEBUG: Locking OpenGL context

                  DEBUG: Unlocking OpenGL context

                  DEBUG: Exiting initSim()

                  Failed to activate the window's context

                  Failed to activate the window's context

                  Failed to activate the window's context

                  ERR  : glBindVertexArray(m_vao) (1282)

                  ERR  : glDrawArrays (1282)

                  ERR  : glBindVertexArrays(0) (1282)

                  Failed to activate the window's context

                  Failed to activate the window's context

                  Failed to activate the window's context

                  Failed to activate the window's context

                  Failed to activate the window's context

                  An internal OpenGL call failed in Texture.cpp (380) : GL_INVALID_OPERATION, the specified operation is not allowed in the current state

                   

                  I think it is a false alarm, but as far as I understand m_vao is actually the buffer where you write the updated point data, so maybe there is something to it. Unfortunately I can't figure out the problem right now; probably something related to the GL context.

                    • Re: CL-GL interop issue
                      Meteorhead

                      "Failed to activate the window's context" is an SFML-internal error message that pops up when makeCurrent() fails while trying to make the window's context the active one. I suspect CodeXL is accessing the context at the same time. Or is that impossible? I highly doubt that sync issues would arise, as I was extra careful to frame all my OpenGL operations with mutex lock/unlock pairs.

                       

                      After seeing so many context errors, I'm not surprised there is a Texture.cpp error. Most likely the context is not set while some operation is trying to run.

                       

                      Anyhow, thanks for your effort. I hope someone else also stumbles upon this thread and tries to solve the problem.