Meteorhead

CL-GL interop issue

Discussion created by Meteorhead on Sep 27, 2012
Latest reply on Sep 28, 2012 by Meteorhead

Hi!

 

I have an application that used to work, but now it does not. It's a not entirely trivial interop application, and the problem is that kernels operating on interop buffers don't seem to run. I tried to debug the application with CodeXL, but as I mentioned in the VS2012 add-in tools discussion, it wrecked my computer. There is a part of the code that updates the vertices, and a very simple kernel that copies from one buffer to the other. However, the kernels do no update at all. If I fetch the data by hand, update the vertices on the host, and copy the result back, it displays correctly.
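The update kernel really is just a copy of the computed values into the GL-shared vertex buffer. A simplified sketch of what upd_vtx does (not the exact source; the kernel names are shortened and float4 stands in for my REAL4 type):

__kernel void upd_vtx(__global const float4* data,   // simulation results (dataBuff)
                      __global float4* vertices,     // GL-shared vertex buffer
                      const int width)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    // Same layout as the host-side fallback loop below:
    // x and y as the position, the first two data components as the payload.
    vertices[y * width + x] = (float4)((float)x, (float)y,
                                       data[y * width + x].s0,
                                       data[y * width + x].s1);
}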

 

What's even stranger is that the code works perfectly on Linux (on both CPU and GPU), but on Windows it works on the CPU and not on the GPU. Here is the relevant part of the code:

 

OpenGL.lock();
mainWindow->setActive();
CL_err = auxRuntime->appQueues[0].enqueueAcquireGLObjects(&vertexBuffs, NULL, NULL);
auxRuntime->checkErr(CL_err, cl::ERR, "clEnqueueAcquireGLObjects(vertexBuffs)");

// The way it should work...
CL_err = auxRuntime->appQueues[0].enqueueNDRangeKernel(upd_vtx, cl::NullRange, upd_vtx_global, upd_vtx_local, NULL, NULL);
auxRuntime->checkErr(CL_err, cl::ERR, "appQueues[0].enqueueNDRangeKernel(upd_vtx)");

// And the way it actually works.
CL_err = auxRuntime->appQueues[0].enqueueReadBuffer(dataBuff, CL_TRUE, 0, data4.size() * sizeof(REAL4), data4.data(), NULL, NULL);
auxRuntime->checkErr(CL_err, cl::ERR, "clEnqueueReadBuffer(dataBuff)");

#pragma omp parallel for num_threads(4)   // parallelize the outer loop over rows
for(int y = 0 ; y < sysHeight ; ++y)
    for(int x = 0 ; x < sysWidth ; ++x)
    {
        cl_float4 temp;
        temp.s[0]=(float)x;
        temp.s[1]=(float)y;
        temp.s[2]=(float)data4[y*sysWidth+x].s[0];
        temp.s[3]=(float)data4[y*sysWidth+x].s[1];
        mesh[y*sysWidth+x] = temp;
    }

glBindBuffer(GL_ARRAY_BUFFER, m_vbo); auxRuntime->checkErr(glGetError(), cl::ERR, "glBindBuffer(m_vbo)");
glBufferSubData(GL_ARRAY_BUFFER, 0, mesh.size() * sizeof(REAL4), mesh.data());
auxRuntime->checkErr(glGetError(), cl::ERR, "glBufferSubData(m_vbo)");
glBindBuffer(GL_ARRAY_BUFFER, 0); auxRuntime->checkErr(glGetError(), cl::ERR, "glBindBuffer(0)");

CL_err = auxRuntime->appQueues[0].enqueueReleaseGLObjects(&vertexBuffs, NULL, NULL);
auxRuntime->checkErr(CL_err, cl::ERR, "clEnqueueReleaseGLObjects(vertexBuffs)");

CL_err = auxRuntime->appQueues[0].finish(); auxRuntime->checkErr(CL_err, cl::ERR, "clFinish()");
imageDrawn = false;
mainWindow->setActive(false);
OpenGL.unlock();

 

No function reports an error, and there can hardly be a mistake in the init part either, given that everything works on the CPU, not to mention the GPU on Linux. I have no idea why this happens. The application does no black magic; it just animates a point cloud that comes out of a physics calculation.
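For completeness, the init part sets up the shared context in the usual way. A shortened sketch of it (the real code branches on Windows/Linux and does more checking; platform, devices and the exact flags stand in for my actual variables):

// Sketch of the interop context creation (abbreviated).
cl_context_properties props[] = {
#if defined(_WIN32)
    CL_GL_CONTEXT_KHR,   (cl_context_properties)wglGetCurrentContext(),
    CL_WGL_HDC_KHR,      (cl_context_properties)wglGetCurrentDC(),
#else
    CL_GL_CONTEXT_KHR,   (cl_context_properties)glXGetCurrentContext(),
    CL_GLX_DISPLAY_KHR,  (cl_context_properties)glXGetCurrentDisplay(),
#endif
    CL_CONTEXT_PLATFORM, (cl_context_properties)(platform)(),
    0
};
cl::Context context(devices, props, NULL, NULL, &CL_err);
auxRuntime->checkErr(CL_err, cl::ERR, "cl::Context()");

// The GL VBO is wrapped as a shared CL buffer and collected into vertexBuffs:
cl::BufferGL vtx(context, CL_MEM_READ_WRITE, m_vbo, &CL_err);
auxRuntime->checkErr(CL_err, cl::ERR, "cl::BufferGL(m_vbo)");
vertexBuffs.push_back(vtx);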

 

Any ideas?
