We are considering buying more AMD Radeon cards, and I have a question about shared memory, but I couldn't find any article about OpenCL 2.0 shared virtual memory (SVM) in combination with multiple GPUs.
What happens if I create an SVM buffer and want to use it across multiple GPUs?
Will this buffer be accessible to multiple GPUs simultaneously? What determines the maximum size of the buffer: host memory, the memory of one of the devices, or their combined memory?
I have read articles saying that in OpenCL 1.2, if I use a context containing multiple GPUs, the devices can access each other's data in kernels: if a kernel running on the first device modifies a buffer, the second device sees the modification. What matters to us is this: we have a common buffer, the first device computes some data into random locations of the buffer, and the second GPU does the same with the same kernel. After both are done, but before the next kernel call, both devices should see the same, current contents of the buffer; it does not matter whether the data physically resides in each GPU's own memory or the GPUs access each other's memory over PCIe. What determines the maximum global memory size in this case? Will this work on Radeon cards, or is it a FirePro-only feature?