Our application has been using OpenCL computations for quite some time now. For unrelated reasons we had always used NVIDIA GPUs, but since their OpenCL performance kept getting worse we are in the middle of switching to AMD. This worked out quite nicely until we noticed some problems after switching to the 13.12 driver. At first we didn't suspect the driver but rather changes we had made to our own code. In the meantime, however, we have narrowed it down to the driver: version 13.11 beta 9.4 works fine; versions 13.11 beta 9.5, 13.12 and also 14.1 beta 1.6 don't.
Let me try to describe the problem: when starting our application we set up the OpenCL framework (context, device(s), command queue, compiling kernels). This always works just fine. Later we actually start using the GPU via OpenCL, i.e. we write to the GPU's memory, enqueue kernels and read back our results. Since 13.11 beta 9.5 this does not always work! Sometimes it does; sometimes the blocking read of the GPU's memory simply never finishes, the calling thread on the CPU hangs, and we have to kill our application.
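For reference, the per-iteration sequence looks roughly like this (a simplified sketch, not our actual code: the buffer names, sizes, kernel arguments and the omission of error checking are all placeholders for illustration):

```c
// Sketch of the write -> enqueue -> blocking read sequence that hangs.
// Requires an OpenCL platform/device to run; error handling omitted.
#include <CL/cl.h>

void run_step(cl_command_queue queue, cl_kernel kernel,
              cl_mem in_buf, cl_mem out_buf,
              const float *input, float *output, size_t n)
{
    /* 1) Upload input data to GPU memory (non-blocking write). */
    clEnqueueWriteBuffer(queue, in_buf, CL_FALSE, 0,
                         n * sizeof(float), input, 0, NULL, NULL);

    /* 2) Enqueue the kernel over the whole range. */
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &in_buf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &out_buf);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL,
                           0, NULL, NULL);

    /* 3) Blocking read of the results (blocking_read = CL_TRUE).
     * With 13.11 beta 9.4 this always returns; with 13.11 beta 9.5,
     * 13.12 and 14.1 beta 1.6 it sometimes never returns. */
    clEnqueueReadBuffer(queue, out_buf, CL_TRUE, 0,
                        n * sizeof(float), output, 0, NULL, NULL);
}
```

Nothing exotic is going on here: plain in-order queue, buffer objects, and a blocking `clEnqueueReadBuffer` at the end.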
However, reading back the results does not seem to be the actual problem. When monitoring the GPU with GPU-Z we normally see an increase in clock frequency, memory usage and GPU load. When the problem occurs, nothing changes at all: it looks as if not a single command we put into the queue is ever executed. We have no idea what's happening here.
Again: the same application on the same system runs fine with 13.11 beta 9.4. Using a later driver version sometimes leads to the described problem. You can have 10 successful runs, and the next one fails without warning.
We have observed this on multiple systems with different specs, all of which use a Radeon R9 280X and run Windows 7 x64.
Any help is very much appreciated! Thanks.