11 Replies Latest reply on Sep 25, 2015 10:02 AM by rich99

    Why was the "flip queue size" option removed from the CCC?

    l0ki

      The driver still supports changing this value, as the registry setting can still be modified, but for quite some time this option has not been available in the Catalyst Control Center itself. This option single-handedly changes my gaming experience: I find that vertical sync has an unplayable amount of input lag without changing it. Setting the value to zero makes vsync much snappier and more responsive, and it feels closer to how the game plays with vsync off. At the default of 3 it feels like dragging my mouse through molasses.

       

      Then again, I doubt I'll get any sort of definitive, authoritative answer.

      Also, I've never used these forums before, but in terms of layout, they are probably the worst I've ever seen in my entire life.

        • Re: Why was the "flip queue size" option removed from the CCC?
          kingfish

          If this is the worst layout you've ever seen in your life, then you haven't lived very long.

           

          If you want to complain/whine to AMD....

           

          Email Form

           

          AMD Issue Reporting Form

          • Re: Why was the "flip queue size" option removed from the CCC?
            backFireX64

            Taken from the TweakGuides site:

             

            " Flip Queue Size: This setting is similar to the 'Max Frames to Render Ahead' Nvidia setting which has been made famous by Oblivion - see this page of my Oblivion Tweak Guide. It works in much the same way, controlling the number of frames which are calculated in advance of being displayed. The default is 3, or Undefined, however by lowering this setting you may be able to resolve mouse lag problems, and even prevent graphics freezes in certain games. Experiment by setting this value to 2 first, and then if necessary try an extreme value like 0. For most people however I recommend either 3, 2 or 1 at the lowest as setting a value of 0 can disable the performance benefits of dual core CPUs for example, and in general lowering this setting will reduce overall FPS the lower the setting. You can try raising it if you want to see if you can gain performance, however again you may experience mouse lag or input lag."

             

            If the above is accurate and up to date, then it raises two questions:

            1. Is there a considerable FPS loss in most modern games from "tweaking" this?

            2. Is there an actual benefit for today's gaming with high-DPI mice and fast-response monitors?

             

            From my own experience, I have used a Razer, a Logitech, and the all-time favorite R.A.T. 9 mouse over the past 6 years on my 2 ms DVI-D monitor, all with high-DPI sensors, and I never experienced any noticeable or game-breaking lag with the default "flip queue size" setting.

            If it actually helps in some cases (and I imagine most GPUs nowadays have enough video RAM to handle frames rendered in advance in a host of games), then I am also wondering why this feature has been dropped since the days of ATI Tray Tools.

            • Re: Why was the "flip queue size" option removed from the CCC?
              kingfish

              So I guess the definitive answer is.......never had it to start with. TA DA

                • Re: Why was the "flip queue size" option removed from the CCC?
                  l0ki

                  I thought I remembered it being tucked into the 3D application settings several years ago, but I could be totally wrong (it wouldn't be the first time).

                   

                  I've been mostly using ATT and RadeonPro, so I probably just got mixed up. It would be nice if it were in the CCC, however. Nvidia has the option, labeled as pre-rendered frames, in their control panel, although since the 300-series drivers they have removed the ability to set it to 0; choosing 0 now just applies the driver or application default, making it a useless and asinine setting because it falls right back to 3. At least you can still set it to 0 with an AMD card, but the option is so hidden it's not even in the control panel.

                   

                  In regards to backFireX64, it's tough to answer those questions, because input lag is going to be engine- and system-specific, which includes things like your monitor, the aforementioned input peripherals, and, more importantly, your CPU. This setting is only relevant when using vertical sync, and for some people turning on vsync doesn't add any perceptible input lag at all. I am also not entirely sure what the FPS hit would be, because it's difficult to test something of that nature while using vsync.

                  If you can maintain your desired framerate, the lower you set this option, the less input lag you will have. In theory it should be 3 frames less at 0 than at the default, which at 60 Hz works out to 50 ms less; I think that's a pretty big deal. Setting it to 1 is generally beneficial and worth the trade in my opinion; I'll estimate a ~10% input lag decrease in exchange for a ~10% lower framerate, but that's a pretty vague estimate. However, by setting it to 0, you will take a significant FPS hit compared to 2 or 1. I'm not even going to speculate on how this relates to the graphics pipeline, because I lack the technical skill.
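
                  To make that 50 ms figure concrete, here's a trivial sketch; the 60 Hz refresh rate and the 0-3 queue depths are just the numbers discussed in this thread, not anything queried from the driver:

                      // Each queued frame adds roughly one refresh interval of
                      // input latency: added_delay = queue_depth / refresh_rate.
                      #include <cstdio>

                      int main() {
                          const double refresh_hz = 60.0;
                          for (int queue = 0; queue <= 3; ++queue) {
                              double added_ms = 1000.0 * queue / refresh_hz;
                              std::printf("flip queue %d -> ~%.1f ms added latency\n",
                                          queue, added_ms);
                          }
                          return 0;
                      }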

                   

                  As a caveat to everything aforementioned, engines can specify a flip queue value, so a game could already be using a flip queue of 1 without you ever changing the driver default.

                • Re: Why was the "flip queue size" option removed from the CCC?
                  Thanny

                  At some point (Vista?), Microsoft's driver guidelines changed, so that the device driver (to be certified) was not allowed to set the queue size beyond 3.  Applications can still set it higher.

                   

                  This was done because Nvidia was setting it to absurdly high values like 10, to facilitate apparent SLI scaling. But with so many frames in the queue, display lag was the inevitable result.

                   

                  So since the driver isn't allowed to go higher than 3, the option is pretty much pointless. That's one of the main reasons why setups with more than three GPUs see such poor additional scaling. Ironically, the driver could set a larger queue in XP, but GPU counts higher than two aren't supported in that OS.

                    • Re: Why was the "flip queue size" option removed from the CCC?
                      rich99

                      The OP doesn't want to set the value higher; he wants to set it lower.

                      Nvidia still has this option in its drivers and allows values from 1 to 4.

                      I've tried changing the registry value

                      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}\0000\UMD

                      and its variations to find the one currently used by the driver, and after monitoring with GPUView I could not see any change in flip queue size.

                      I have a similar thread here, asking the question as a developer, but I didn't get any answer:

                      Setting flip-queue size in Direct3D 9 code

                      In D3D9Ex and later we can set this value in code; I could not find how to do it in plain D3D9.
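
                      For reference, the D3D9Ex route is IDirect3DDevice9Ex::SetMaximumFrameLatency. A minimal sketch, assuming a device was already created through Direct3DCreate9Ex; CapFrameLatency is just an illustrative name:

                          // Cap the frame queue on a D3D9Ex (Vista and later) device.
                          // Plain IDirect3DDevice9 exposes no equivalent method.
                          #include <d3d9.h>
                          #pragma comment(lib, "d3d9.lib")

                          // Hypothetical helper; call once after device creation.
                          HRESULT CapFrameLatency(IDirect3DDevice9Ex* device)
                          {
                              // Queue at most one frame ahead of the display.
                              return device->SetMaximumFrameLatency(1);
                          }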

                      I've tried the RadeonPro tool and it didn't help either.

                    • Re: Why was the "flip queue size" option removed from the CCC?
                      rich99

                      You need to create FlipQueueSize as a String (REG_SZ) registry value and set it to 1 (not FlipQueueSize_SET, like I thought before). Create it under the following key, where XX is a number related to your driver. I had many of these keys on my system, so I filled them all in, but you can tell which one belongs to the active driver because it is more populated than the others:

                      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\00XX\UMD\

                      Tested, and it works. It will only be one frame more than Nvidia's Max Frames to Render Ahead set to 1; cutting that extra frame requires coding from the developers.
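
                      For anyone who'd rather script the change, here's a minimal Win32 sketch of the same edit. SetFlipQueueSize is just an illustrative name, and the 0000 instance is an assumption; substitute whichever 00XX key you identified as the active one:

                          // Writes FlipQueueSize as a REG_SZ string (not a DWORD)
                          // under the adapter's UMD key. Run with admin rights.
                          #include <windows.h>

                          // Hypothetical helper; "0000" is an assumed instance.
                          bool SetFlipQueueSize(const wchar_t* size /* e.g. L"1" */)
                          {
                              const wchar_t* path =
                                  L"SYSTEM\\CurrentControlSet\\Control\\Class\\"
                                  L"{4d36e968-e325-11ce-bfc1-08002be10318}\\0000\\UMD";
                              HKEY key;
                              if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, path, 0,
                                                KEY_SET_VALUE, &key) != ERROR_SUCCESS)
                                  return false;
                              const LONG rc = RegSetValueExW(
                                  key, L"FlipQueueSize", 0, REG_SZ,
                                  reinterpret_cast<const BYTE*>(size),
                                  static_cast<DWORD>((lstrlenW(size) + 1) * sizeof(wchar_t)));
                              RegCloseKey(key);
                              return rc == ERROR_SUCCESS;
                          }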
