0 Replies Latest reply on Jun 23, 2014 10:22 AM by jseb

    Broken D3D9 TimeStamp query accuracy: our game can't use a downscale optimisation on your cards

    jseb

      Hello,

       

      A few years ago, we talked about the inaccuracy of D3D9's TimeStamp query (I'm referring to an email conversation with devrel@amd.com, which appears to be down now).

      At the time, this issue wasn't so important to us because we only used the timer to debug GPU performance.

       

      Now we have a shipped game that uses the GPU timer to decide whether to downscale the particle rendering.

       

      We need that downscale because the particle GPU cost can be much higher than in the usual case.

      There are two main reasons:

      a) particle fill rate (GPU cost) depends on multiplayer gameplay conditions (which are hard to predict)

      b) players can tweak/create their own particle FXs

       

      Without that downscale, players may experience an FPS drop due to the high GPU cost.

      With that downscale, we prefer to reduce particle FX quality to preserve FPS.
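      For context, the downscale decision is essentially the following (a minimal sketch with illustrative names and budget values, not our shipping code):

```cpp
// Minimal sketch of the downscale heuristic (illustrative names/thresholds):
// if the measured GPU time of the particle pass exceeds a per-frame budget,
// halve the particle render scale; if it is comfortably under budget and the
// scale was reduced earlier, restore it.
float NextParticleScale(double particleGpuMs, float currentScale)
{
    const double kBudgetMs = 2.0; // illustrative particle budget per frame
    if (particleGpuMs > kBudgetMs && currentScale > 0.25f)
        return currentScale * 0.5f;
    if (particleGpuMs < kBudgetMs * 0.5 && currentScale < 1.0f)
        return currentScale * 2.0f;
    return currentScale;
}
```

      Of course, this only works if the measured GPU time is trustworthy — which is exactly the problem described below.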

       

      Due to the bug in your driver, we've disabled this downscale feature for all of our players using an AMD graphics device

      with the D3D9 32-bit game build (which is the only one they have so far).

       

      So players using AMD graphics will still see their FPS drop under heavy particle FX conditions.

       

      I've narrowed down the issue:

      a) I have never seen such an issue with NVIDIA drivers

      b) There is no issue when the application (our game) is an x64 build (but our players only have access to the D3D9 32-bit version)

      c) The issue is present with D3D9 and D3D9Ex, but not with D3D11

      d) Some of the lower bits of the 64-bit TimeStamp value are always '0'

      e) The accuracy of the TimeStamp is maximal when the PC starts (or resumes from standby/hibernation),

      and it degrades as time goes on (the number of low bits stuck at '0' in (d) keeps rising over time)
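      For reference, this is how I measure (d)/(e): just counting the trailing zero bits of the raw 64-bit value returned by the query (hypothetical helper, not AMD code):

```cpp
#include <cstdint>

// Count trailing '0' bits of a raw 64-bit timestamp value. On a healthy
// 27 MHz counter the low bit should toggle almost every sample; with the
// bug, more and more low bits stay permanently at zero as uptime grows.
int TrailingZeroBits(uint64_t ts)
{
    if (ts == 0)
        return 64;
    int n = 0;
    while ((ts & 1) == 0) {
        ts >>= 1;
        ++n;
    }
    return n;
}
```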

       

      About (e): the accuracy loss progresses, for instance, from:

        9 bits (0.02 ms) to 10 bits (0.04 ms) in  2mn30s
      10 bits (0.04 ms) to 11 bits (0.08 ms) in  5mn18s
      11 bits (0.08 ms) to 12 bits (0.15 ms) in 10mn36s
      12 bits (0.15 ms) to 13 bits (0.30 ms) in 21mn
      ...

       

      So around 40mn after the PC is started, the accuracy is only about 0.60 ms, and after around 1h20mn, only 1.20 ms.

      This is far too coarse for sub-frame GPU profiling!
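      Note the exact doubling of the intervals above: 5mn18s, 10mn36s and 21mn(12s) correspond exactly to 2^33, 2^34 and 2^35 ticks of a 27 MHz clock. This is pure speculation on my side, but such a power-of-two pattern looks more like a floating-point precision loss somewhere in the conversion than a plain integer wrap. A quick arithmetic check:

```cpp
#include <cmath>

// Speculative check: the uptime (in seconds) at which a 27 MHz counter
// reaches 2^n ticks. The reported bit-loss thresholds fall exactly on
// consecutive powers of two of the tick count, which is characteristic
// of a relative (floating-point-like) precision loss, not of a wrap.
double SecondsAtTickCount(int n)
{
    const double kClockHz = 27.0e6;       // AMD's 27 MHz timestamp clock
    return std::ldexp(1.0, n) / kClockHz; // 2^n / 27e6
}
```

      SecondsAtTickCount(33) ≈ 318 s (5mn18s), SecondsAtTickCount(34) ≈ 636 s (10mn36s), SecondsAtTickCount(35) ≈ 1273 s (21mn12s).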

       

      It looks like there is an integer overflow somewhere in your 32-bit driver.

      The strange thing is that your timestamp frequency is actually lower than other vendors', where there is no such issue

      (you're using a 27 MHz clock while others use up to a 100 MHz one).
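      For scale: at 27 MHz a single tick is about 37 ns, so the raw counter resolution would be more than enough for sub-frame profiling; the problem is entirely the bit loss. The conversion is trivial (assuming the frequency reported by D3DQUERYTYPE_TIMESTAMPFREQ is 27 MHz, as on my card):

```cpp
#include <cstdint>

// Convert raw timestamp ticks to milliseconds. kFreqHz is what
// D3DQUERYTYPE_TIMESTAMPFREQ reports on my HD 6900 (27 MHz); one tick
// is therefore ~37 ns, so the clock itself is not the limiting factor.
double TicksToMs(uint64_t ticks, uint64_t kFreqHz = 27000000)
{
    return 1000.0 * static_cast<double>(ticks) / static_cast<double>(kFreqHz);
}
```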

       

      I do not understand why this bug cannot be fixed in your D3D9/D3D9Ex driver.

      Could you confirm whether there really is no way to fix this issue?

       

      Again, due to this driver issue, we have had to disable an optimisation feature on your graphics devices!
      (So our game may run faster on other vendors' hardware, assuming GPUs of comparable performance.)


      I'm using an AMD Radeon HD 6900 (2 GB) with the latest Catalyst (14.4, but the issue was the same with version 11).

      Thanks

      JSeb