2 Replies Latest reply on Apr 23, 2012 1:22 PM by nyanthiss

    OpenCL gpu tuning

    nyanthiss

      Hi,

       

      I have two GPUs, an integrated Radeon HD 3300 and an HD 7950. I want to use the 3300 as the display device and the 7950 for compute.

       

      AFAIK neither of them supports preemption, so the driver pushes commands to the GPU in small batches. This results in 100% CPU usage. Is there some environment variable I could use to lower CPU usage? I know there used to be FLUSH_INTERVAL and various CAL env variables, but those don't seem to work with OpenCL.

       

      Cheers

        • Re: OpenCL gpu tuning
          arsenm

          The Radeon 3300 won't support OpenCL at all. As far as I understand, the 7950 should get this enabled in a future driver update.

          The 100% CPU usage isn't related to using small packets. clFinish, clWaitForEvents, etc. spinning at 100% CPU is a driver problem that comes and goes on both AMD and Nvidia. I somewhat work around this by estimating the execution time and sleeping for most of it before calling these functions.

            • Re: OpenCL gpu tuning
              nyanthiss

              arsenm wrote:

               

              The radeon 3300 won't support OpenCL at all. As far as I understand the 7950 should have this feature enabled in a future driver update.

              Right, I don't intend to do OpenCL on the 3300; I only want to use it to drive the desktop, while using the 7950 purely as a compute device.

               

              The 100% CPU usage isn't related to using small packets. clFinish, clWaitForEvents, etc. spinning at 100% CPU is a driver problem that comes and goes on both AMD and Nvidia. I somewhat work around this by estimating the execution time and sleeping for most of it before calling these functions.

              As far as I understand, you have to send commands to the GPU in small pieces, since it's not preemptible and has to handle graphics alongside OpenCL. If it stops responding to Windows, TDR kicks in and resets the GPU. I know I can turn off TDR; I was wondering if there is some knob to make the driver treat the device more like a pure compute device and not care so much about preemption...
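For reference, the TDR behavior mentioned above is controlled by registry values under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers. A gentler option than disabling it outright is raising the timeout; both values below are the documented Microsoft keys, and the 30-second delay is just an example:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
; Extend the GPU hang timeout from the 2-second default to 30 seconds
"TdrDelay"=dword:0000001e
; Or disable timeout detection and recovery entirely (TdrLevel = 0);
; a hung kernel will then freeze the display until it completes
"TdrLevel"=dword:00000000
```

Changes take effect after a reboot. With a second GPU driving the display, a long timeout on the compute card is usually safer than disabling TDR globally.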