I'm looking for an example of multi-GPU handling. I've seen the SimpleMultiDevice example, which shows how to create multiple command queues on a single context; there the input data is split into two halves and each half is executed on a different GPU.
In my case I'm looking for an example where there are two different programs, each with its own kernels, and the job is split between two GPUs. The first program processes the input data and is executed on one GPU. Afterwards the first GPU sends its data to the second GPU, which runs the second program. There I await the output data for rendering.
Is there any code pattern demonstrating this design?
How would I modify SimpleMultiDevice to have the tasks processed in parallel like this?