
groovounet
Journeyman III

Sharing memory between devices?

Beta 4 is an exciting release, but unfortunately I don't have a Radeon HD to try it with.

Is it possible with Beta 4 to create one device using the CPU and another using the GPU, and "share" memory between the two, so that part of the computation can be done on the CPU and part on the GPU?

I'm not thinking about executing the same kernels on the CPU and GPU and making them compute in parallel, but rather about separating the task into several sub-tasks, each computed on the CPU or the GPU according to where it fits best, so as to use 100% of the platform.

Thanks!

Keep up the great work on this great innovation!

n0thing
Journeyman III

I haven't tried this but it could work:

Create a context with both the CPU and the GPU as devices. Then create a memory buffer using that context; the buffer will be created for BOTH devices. Create two command queues, one for the CPU and the other for the GPU. You should then be able to enqueue different kernels to each device's command queue, with each device operating on its own local copy of the memory buffer.
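Something like the following, as a minimal sketch assuming the OpenCL 1.0 C API shipped with the Stream SDK. Error checking is omitted, and the "scale" kernel plus the 50/50 split of the index space are placeholders I made up for illustration, not anything required by the SDK:

/* Sketch: one context with CPU + GPU, one shared buffer, two queues. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *buf, uint offset) {\n"
    "    buf[offset + get_global_id(0)] *= 2.0f;\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    /* One CPU device and one GPU device from the same platform. */
    cl_device_id cpu, gpu;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &gpu, NULL);

    /* A single context containing both devices. */
    cl_device_id devices[2] = { cpu, gpu };
    cl_context ctx = clCreateContext(NULL, 2, devices, NULL, NULL, NULL);

    /* One command queue per device. */
    cl_command_queue q_cpu = clCreateCommandQueue(ctx, cpu, 0, NULL);
    cl_command_queue q_gpu = clCreateCommandQueue(ctx, gpu, 0, NULL);

    /* One buffer, created against the shared context. */
    const size_t n = 1024;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                n * sizeof(float), NULL, NULL);

    /* Build the same program for both devices in the context. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 2, devices, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

    /* First half of the index space on the CPU, second half on the GPU,
     * via an explicit offset argument (OpenCL 1.0 requires a NULL
     * global_work_offset). Argument values are captured at enqueue time. */
    size_t half = n / 2;
    cl_uint off = 0;
    clSetKernelArg(k, 1, sizeof(cl_uint), &off);
    clEnqueueNDRangeKernel(q_cpu, k, 1, NULL, &half, NULL, 0, NULL, NULL);

    off = (cl_uint)half;
    clSetKernelArg(k, 1, sizeof(cl_uint), &off);
    clEnqueueNDRangeKernel(q_gpu, k, 1, NULL, &half, NULL, 0, NULL, NULL);

    clFinish(q_cpu);
    clFinish(q_gpu);
    printf("done\n");
    return 0;
}

Initializing the buffer and collecting the results (clEnqueueWriteBuffer / clEnqueueReadBuffer) are left out here; since each device may end up working on its own copy of the buffer, you would read the relevant half back through each queue before combining the results on the host.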
