Hi, I'm running into the following synchronization issue:
I have a kernel that writes output to a buffer. After the kernel call I issue a finish() on that queue (I also tried waitForEvents, enqueueBarrier, etc.) and then do a blocking read of that buffer to host memory. If the device is under heavy load (three other GPU BOINC projects running), there is a small chance (less than 1%) that the results of the last several work-groups (10-70, proportional to the number of CUs) have not been written by the time the copy occurs. When this happens, the profiling end time of the kernel event is also bogus. If I loop, retrying the read until the whole buffer is non-zero (it was initialized to 0 for this experiment), the data does eventually show up after 1-50 ms. My guess is that the global cache is not flushed to memory at the end of the kernel: the problem occurs at 128-byte granularity, while my writes are not 128-byte aligned, so there are some wavefronts whose output is only partially visible.
Workaround: the problem disappears if I enqueue a dummy kernel that takes the same buffer as an argument, even if the dummy kernel never reads from it. A read issued after the dummy kernel finishes returns correct and complete data.
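For reference, the workaround looks roughly like this (a non-runnable sketch: it needs an OpenCL context and device, and `queue`, `mainKernel`, `dummyKernel`, `outBuf`, `globalSize`, `bytes`, and `hostPtr` are hypothetical names standing in for my actual setup):

```cpp
// Real kernel writes its results into outBuf.
queue.enqueueNDRangeKernel(mainKernel, cl::NullRange,
                           cl::NDRange(globalSize), cl::NullRange);
queue.finish();  // completes, but a read here can still see stale data

// Workaround: enqueue a trivial kernel that takes the same buffer as an
// argument (it never touches the data), then wait for it.
dummyKernel.setArg(0, outBuf);
queue.enqueueNDRangeKernel(dummyKernel, cl::NullRange,
                           cl::NDRange(1), cl::NullRange);
queue.finish();

// This blocking read now returns correct and complete data.
queue.enqueueReadBuffer(outBuf, CL_TRUE, 0, bytes, hostPtr);
```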
Is this a known issue? I can provide streamlined repro code if needed.
Windows 7 Pro x64
260X or 290X
Catalyst 14.9 or 14.11.2
32-bit or 64-bit executable
Heavy GPU load from other tasks exposes the race condition.