I'm trying to use VAO for interop between OGL and OCL on Debian 3.2.35-2 x86_64 with ATI drivers 9.012-121219a-151962C-ATI (HD 6670 card)
First, I create two VAO objects, then use them as output buffers in OCL via clCreateFromGLBuffer(...).
Everything is OK as long as I'm not calling glDrawArrays(GL_POINTS, ...).
If I don't call it, the OCL output buffer is OK, and when I check the OGL buffer it was created from, I get the same data.
If I do call glDrawArrays, the OCL output buffer is still OK, but the OGL buffer is never updated after the first draw; it's as if the OCL and OGL buffers were unlinked after the first draw ...
I'm using clEnqueueAcquireGLObjects(...) and get no OCL or OGL error at all (I check all commands).
I checked my code with an nVidia card and don't have this problem (but I have others with texture use in geometry shaders ... which works perfectly on ATI cards !!!! ... but that's another point ... for another forum ...).
Any idea ?
Do you mean VBO? Because a VAO is something different in modern OpenGL. Did you try the SimpleGL example from the SDK? Also, you need to create the OpenGL VBO after creating the OpenCL context. That means: create the OpenGL context, then create the OpenCL context, then create the VBO, and finally create the OpenCL shared object from it.
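For reference, that creation order looks roughly like this on Linux/GLX (a sketch only; `platform` and `device` are assumed to be already queried, and all error handling is omitted):

```
/* 1. OpenGL context already created and made current (GLX/GLUT/SDL...). */

/* 2. Create the OpenCL context that shares the current GL context. */
cl_context_properties props[] = {
    CL_GL_CONTEXT_KHR,   (cl_context_properties)glXGetCurrentContext(),
    CL_GLX_DISPLAY_KHR,  (cl_context_properties)glXGetCurrentDisplay(),
    CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
    0
};
cl_int err;
cl_context oclcontext = clCreateContext(props, 1, &device, NULL, NULL, &err);

/* 3. Only now create the VBO ... */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, SIZE, NULL, GL_DYNAMIC_DRAW);

/* 4. ... and wrap it as a shared OpenCL buffer. */
cl_mem mem = clCreateFromGLBuffer(oclcontext, CL_MEM_READ_WRITE, vbo, &err);
```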
Here is my code (simplified):
OGL, then OCL init from OGL context
// OGL buffer creation
// Create the treated-data buffer for OGL input (not initialized, just allocated on the graphics card, hence the NULL data pointer)
// Updated by OCL (or manually while OCL/OGL interop is not working)
glGenBuffers(1, &_vboTreatedData0);
glBindBuffer(GL_ARRAY_BUFFER, _vboTreatedData0);
glBufferData(GL_ARRAY_BUFFER, SIZE, NULL, GL_STATIC_DRAW);
glVertexAttribPointer(0, 1, GL_UNSIGNED_BYTE, GL_FALSE, 0, 0);
// OCL buffer from OGL buffer
mem = clCreateFromGLBuffer(oclcontext, CL_MEM_READ_WRITE, _vboTreatedData0, &lasterror);
// then the display loop with the OCL treatment
clEnqueueAcquireGLObjects(cq, 1, &mem, 0, NULL, NULL);
// ... enqueue the OCL kernel writing into mem (omitted) ...
clEnqueueReleaseGLObjects(cq, 1, &mem, 0, NULL, NULL);
glDrawArrays(GL_POINTS, 0, SIZE);
If glDrawArrays is not called, all buffers are updated as expected.
If it is called, the OGL buffer is only updated the first time. After that it's no longer updated, but the OCL buffer that was created from it is still updated.
And it works as expected on nVidia hardware.
So it may be the way I draw and update the buffers, or a driver-related problem.
I'll try to extract my code into a sample app ...
I don't see any clFinish() or glFinish(). If you don't use sync objects, you need to ensure proper synchronization with these calls. Also, AMD's OpenCL doesn't start execution until you call clFlush() or another API call that calls clFlush() implicitly, like synchronous data transfers.
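Applied to the loop posted above, the synchronization could look like this (a sketch; the acquire/release pair, `cq`, `mem`, and `SIZE` are from the posted code, the rest is illustrative):

```
glFinish();                            /* make sure GL is done touching the VBO */
clEnqueueAcquireGLObjects(cq, 1, &mem, 0, NULL, NULL);
/* ... enqueue the OCL kernel writing into mem ... */
clEnqueueReleaseGLObjects(cq, 1, &mem, 0, NULL, NULL);
clFinish(cq);                          /* flush + wait so GL sees the new data */
glDrawArrays(GL_POINTS, 0, SIZE);
```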
Can you confirm whether you have checked the relevant APP SDK samples?
I would request you to attach an appropriate test case (as a Zip file) in case the problem persists.
Yes, other samples using interop are working (APP SDK samples). I'll try to extract a test case from my application to reproduce this problem. But I'll be out of office for 2 weeks, so I may not be able to provide it before the 13th of May. Thanks.
I had an issue like this, and I think it boils down to the order of creation of OpenGL vs. OpenCL objects. The OpenCL context has to be created before any OpenGL objects are created, preferably immediately after the OpenGL context is created.
Back on this project! I have pinpointed the problem's origin:
- create an input CL buffer (1)
- create a VBO (3) with OGL, then a CL buffer (2) from this GL VBO
then the loop:
- update the input CL buffer (1)
- launch a simple "copy" CL kernel from CL buffer (1) to CL buffer (2)
- display the content of the updated VBO (3) with glDrawArrays(GL_POINTS, ...)
On the first loop: everything is fine!
On the next ones: the VBO is not updated.
If I remove the glDrawArrays call, the VBO is correctly updated on every loop !!!!
I have done tests on Linux and Windows with various AMD cards (HD7000 and Mobility M6000) and drivers (including the latest ones) with the same results: only the first update of the VBO works!
I have done tests using nVidia cards: everything works as expected!
So it seems to be AMD driver related.
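For completeness, the "copy" kernel in this setup is just a straight element copy, something like the following (illustrative only; the kernel name and the uchar element type are assumptions):

```
/* Copies CL buffer (1) into CL buffer (2), the buffer shared with VBO (3). */
__kernel void copy_buf(__global const uchar *in, __global uchar *out)
{
    size_t i = get_global_id(0);
    out[i] = in[i];
}
```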
Thanks for any help.
UPDATE: The problem only appears when using GL_UNSIGNED_BYTE; no problem with GL_FLOAT!
glVertexAttribPointer(0, 1, GL_UNSIGNED_BYTE, GL_FALSE, 1, 0);
Here are 2 running samples on Linux 64-bit:
- TestAMD_UNSIGNED_BYTE, showing the problem
- TestAMD_UNSIGNED_SHORT, working as expected
Both use the same code; only the data format in the VBO is changed.
This problem is heavily impacting my project, now in production phase, so if you could find a quick solution it would be great!