You are in the same shoes as I am. I'm just about to put together some classes that ease my job of multi-GPU rendering and computation.
Practically speaking, OpenGL contexts are a pain. A thread can have only one OpenGL context current at a time, and a context can be current in only one thread at a time (mutual exclusion in both directions). wglGetCurrentContext will only return the context used for desktop rendering — more precisely, the one on the default adapter that drives desktop no. 1.
If you would like to use multiple OpenGL contexts at the same time, together with the OpenCL contexts that share with them, you need at least one thread for each OpenGL–OpenCL context pair. You can use any threading library you like to achieve this (OpenMP, MPI, Boost::Thread, SFML, etc.).
The AMD OpenCL Programming Guide shows sample code in its appendix for querying non-default contexts under both Linux and Windows. Yes, it's a pain that you have to interact with the windowing API for that. Coming to interop programming from the OpenCL side, it is very strange that OpenGL has not defined standard context, device, etc. types over the past 15 years.
So, to put it short: you will need at least as many threads as devices you want to use, but if you use more, and some OpenGL contexts are accessed from multiple threads, you have to put a mutex around the setActive() calls. (Make sure to unset the context after finishing with it, before unlocking the mutex.)
Basically, multi-device interop is a real pain, and it is far from straightforward to create a generic solution that covers many application setups.
I was told that there will be some webinar, tutorial, or similar about windowing-system tricks for these kinds of cases. (I have no idea what happens if I drag the window over to a desktop rendered by a different card. How should I handle this change? Must my OpenCL context follow the window itself?)
Anyhow, good luck!
Thank you for such a quick answer.
I hadn't seen this appendix in the AMD APP Guide before.
I just gave it the once-over, and the mechanism seems a bit awkward. I'll dig into it.
I think we are in the same situation (a library to ease CL development, genericity really matters, etc.), so I'll let you know if I find any other help, examples, or information.
I'm sorry I can't help you with your question.
I've looked into the appendix at the end of the AMD APP Guide and found these lines:
• It is recommended not to use GLUT in a multi-GPU environment.
• AMD currently supports CL-GL interoperability only in a single-GPU
So I decided to focus on single-GPU CL-GL interop. I will come back to the multi-GPU case later.
I have one tiny further question. Since it concerns interop, I assumed you would know the answer.
I noticed the "cl_khr_gl_sharing" extension on my CPU device. In the case of a CPU, I don't understand what it means.
1) Do I have to check the host's sharing ability? (It would surprise me if not every CPU were able to host a CL-GL interop program.)
2) Is OpenGL able to emulate a GPU on a CPU (without using a Mesa-like API)?
The presence of this extension bothers me.
That interoperability is emulated: the OpenCL runtime just copies data from OpenCL buffers/images to OpenGL via glBufferData()/glTexImage2D(). The whole point of OpenGL/OpenCL interoperability is to share data with OpenGL without copying through the host, so it is pointless in a multi-GPU environment — it will just copy the data under the hood.
It is far from pointless in a multi-GPU environment. With proper object distribution, you can do parallel rendering without having to move huge datasets. That is exactly what I want to do in a project that would benefit from it a lot.