
Lag when reading CL buffers from GL using clCreateFromGLBuffer?

Discussion created by Pillard on Jun 7, 2010
Latest reply on Jun 7, 2010 by Pillard
I'm trying to use OpenCL along with OpenGL. I get a huge lag (almost 1 second) every 3-4 seconds. What am I doing wrong?

Hello all!

I'm trying to get OpenCL to read buffers output by OpenGL (the Z buffer, actually). It works pretty well, except that I get a huge slowdown of approximately 1 second every 3-4 seconds.

Here is the code I use to retrieve the GL Z buffer. Does anyone see something very wrong in the code below? (This code gets executed every frame.) It doesn't do anything useful anymore, as I removed everything I could in order to isolate the instruction creating the lag (apparently it is caused by clCreateFromGLBuffer). Note: the GL buffers are created earlier, during init().

EDIT: no, I was wrong: clCreateFromGLBuffer works fine; the lag comes from clEnqueueAcquireGLObjects()... maybe a synchronisation issue?

// Bind the PBO that will receive the depth data
glBindBuffer(GL_PIXEL_PACK_BUFFER, GLBuffers[0]);

unsigned int size = resoX * resoY * 2 * sizeof(cl_float) * maxLocComplexity * 2;
if (!initDone)
    depths = (cl_float*)malloc(size);
memset(depths, 0, size);

// (Re)allocate the PBO storage and read the depth buffer into it
glBufferData(GL_PIXEL_PACK_BUFFER, size, (GLvoid*)depths, GL_STATIC_READ);
glReadPixels(0, 0, resoX * 2 * maxLocComplexity, resoY * 2,
             GL_DEPTH_COMPONENT, GL_FLOAT, 0);

// Wrap the GL buffer as a CL memory object
depthBuf = clCreateFromGLBuffer(context, CL_MEM_READ_ONLY, GLBuffers[0], &status);
if (!sampleCommon->checkVal(status, CL_SUCCESS,
                            "createFromGL failed. (GLBuffers[0])"))
    return SDK_FAILURE;

// Hand the buffer over to CL, then immediately back to GL
status = clEnqueueAcquireGLObjects(commandQueue, 1, &depthBuf, 0, 0, NULL);
status = clEnqueueReleaseGLObjects(commandQueue, 1, &depthBuf, 0, 0, 0);

glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
clReleaseMemObject(depthBuf);
return 0;
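For comparison, here is a sketch of the interop pattern the OpenCL spec describes: create the CL object from the GL buffer once at init instead of every frame, call glFinish() before acquiring so GL commands touching the buffer have completed, and clFinish() after releasing before GL reuses it (unless the cl_khr_gl_event extension is available for finer-grained sync). The variable names follow the post; the init/frame/shutdown split is my assumption about how the surrounding code is organized, so treat this as a sketch rather than a drop-in fix.

```cpp
// --- at init(), once: wrap the already-created GL PBO as a CL buffer ---
depthBuf = clCreateFromGLBuffer(context, CL_MEM_READ_ONLY, GLBuffers[0], &status);

// --- every frame ---
glBindBuffer(GL_PIXEL_PACK_BUFFER, GLBuffers[0]);
glReadPixels(0, 0, resoX * 2 * maxLocComplexity, resoY * 2,
             GL_DEPTH_COMPONENT, GL_FLOAT, 0);     // async readback into the PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glFinish();                                        // GL must be done before CL acquires

status = clEnqueueAcquireGLObjects(commandQueue, 1, &depthBuf, 0, NULL, NULL);
// ... enqueue kernels that read depthBuf here ...
status = clEnqueueReleaseGLObjects(commandQueue, 1, &depthBuf, 0, NULL, NULL);
clFinish(commandQueue);                            // CL must be done before GL reuses it

// --- at shutdown, once ---
clReleaseMemObject(depthBuf);
```

Note that the per-frame malloc-sized glBufferData call in the original also reallocates the PBO's storage every frame; allocating it once at init (with a GL_STREAM_READ or GL_DYNAMIC_READ hint, since it is rewritten continuously) avoids that as well.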
