My question is: if I were to enhance my OCL framework with a neat device-selection model, is there any way to determine which device is used for rendering?
In DX/GL interop applications we have to supply a valid render context when creating the OpenCL context. But how does that carry device information at the OCL level?
Or to paraphrase the question: how can one dynamically detect which OpenCL device the application uses for DX/GL rendering? Being able to detect this would allow applications to recognize systems with multiple GPUs, exclude the devices used for rendering, and run compute work (physics, or anything else) on a different, possibly idle device. Thus a well-written framework and/or application could use intelligent load balancing on any system setup.
Thank you for the quick answer. Let me ask just two more questions.
Since I am a big fan of cross-platform and cross-vendor coding, I was looking for the DX equivalent of that function. I found something that might be what I was looking for, but I am not sure how to use it.
cl_d3d10.h holds the following typedef (attached code). I suspect that clGetDeviceIDsFromD3D10KHR_fn is the function to call, but the header contains no other relevant information about what parameters it requires.
And as for the second question: if I wanted to exclude the device used for desktop rendering, I would have to pass the desktop context to this function, which would return the associated OpenCL device used for rendering.
(Please correct me if I'm getting something wrong in the following.) Because of the ever-present tangle between the X server and fglrx, if I'm not mistaken, all devices take part in desktop rendering under Linux. That is what DISPLAY=:0 is responsible for: extending the desktop to all devices. To keep this from conflicting with other applications, COMPUTE=:0 was introduced, but that does not change the fact that desktop rendering is present on all devices (GUI elements and wallpaper loaded into memory, screen-saver rendering, fade effects calculated across all devices on screen fade, and so on).
Anyhow, on Windows systems desktop rendering is always hardware accelerated if a DX-capable device is found. And under Linux? Will calling such a function (asking for the desktop context's device) return anything if no graphics effects are enabled (under Ubuntu, let's say)? Let's imagine fglrx became independent of the X server: would the X server then use the first OpenGL X.Y-capable device to render the desktop?
Edit: OK, I found the relevant DX function, but the desktop render question still remains.
    typedef CL_API_ENTRY cl_int (CL_API_CALL *clGetDeviceIDsFromD3D10KHR_fn)(
        cl_platform_id             platform,
        cl_d3d10_device_source_khr d3d_device_source,
        void *                     d3d_object,
        cl_d3d10_device_set_khr    d3d_device_set,
        cl_uint                    num_entries,
        cl_device_id *             devices,
        cl_uint *                  num_devices) CL_API_SUFFIX__VERSION_1_0;
It is really a shame. I found that OpenCL is roughly 5-15% faster under Linux than on Windows. I believe this is because even the plain Windows GUI makes more graphics calls than the somewhat more spartan Ubuntu GUI.
Compute exclusive devices would be welcome indeed.