Concurrent GPU accesses

Discussion created by jean-claude on Jan 11, 2009
Latest reply on Jan 12, 2009 by MicahVillmow


Assume a program has just filled and flushed a command queue to the GPU for a batch of consecutive kernel executions. To take full advantage of the available processing power, the program can then still perform a bunch of CPU tasks while waiting for the GPU to finish.

That said, while all this is executing, the operating system obviously still handles screen refresh and grants other programs access to the GPU...


(1) Is there a form of scheduler for GPU allocation?

(2) What happens to the kernel queue issued by the program — is its execution interleaved with other monitoring tasks imposed by the OS? And what can be done to set up some form of concurrent sharing of the GPU?

(3) On Vista32, what exactly does the TdrDelay parameter in the Windows registry mean? That is, is this maximum delay the maximum time allowed between the last command-queue flush and the first subsequent task completion reported by the GPU?
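For reference, the value I am asking about lives under the GraphicsDrivers key. A sketch of a .reg file that would raise it (the 10-second value here is just an example; I believe the default is 2 seconds):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
; example only: set the TDR watchdog delay to 10 seconds (0x0000000a)
"TdrDelay"=dword:0000000a
```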

(4) Overall, what kind of concurrent-access monitoring is performed under the hood by the OS and the GPU driver, and how can this be used smartly to emulate concurrent GPU access within a specific program?

I understand that this is a little bit tricky... but this forum is for skilled and hungry programmers, isn't it?