The driver still supports changing this value, as the registry setting can still be modified, but for quite some time this option has not been available in the Catalyst Control Center itself. This option single-handedly changes my gaming experience, because I find that vertical sync has an unplayable amount of input lag without it. Setting the value to zero makes vsync much snappier and more responsive, and it feels closer to running with vsync off. At the default of 3 it feels like dragging my mouse through molasses.
Then again I doubt I'll get any sort of definitive authoritative answer.
Also, I've never used this forum before, but in terms of layout, it's probably the worst I've ever seen in my entire life.
I'm honestly not interested in getting into a pissing contest about the site layout, as it's pretty ancillary to my main question. I found navigating the forum frustrating, but that's just my opinion on the matter.
The more pertinent point is why this feature was removed from the Catalyst Control Center for seemingly no good reason. It still works if you change the registry value, but I only figured that out after a lot of searching. So, if the driver still supports it, and the option used to be there in the Catalyst Control Center, why can you no longer change it there?
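For anyone else who went searching, the registry change usually cited in tweak guides looks roughly like the fragment below. Treat it as a sketch, not gospel: the class GUID is the standard display-adapter class key, but the instance subkey (`0000` here) depends on your system, and the exact value name and format (some guides use a string, others a binary value, sometimes under a `UMD` subkey) vary by driver version. Export the key as a backup before touching anything.

```reg
Windows Registry Editor Version 5.00

; Sketch only -- instance subkey (0000) and value format vary by driver version.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}\0000]
"FlipQueueSize"="0"
```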
Taken from TweakGuides site:
" Flip Queue Size: This setting is similar to the 'Max Frames to Render Ahead' Nvidia setting which has been made famous by Oblivion - see this page of my Oblivion Tweak Guide. It works in much the same way, controlling the number of frames which are calculated in advance of being displayed. The default is 3, or Undefined, however by lowering this setting you may be able to resolve mouse lag problems, and even prevent graphics freezes in certain games. Experiment by setting this value to 2 first, and then if necessary try an extreme value like 0. For most people however I recommend either 3, 2 or 1 at the lowest as setting a value of 0 can disable the performance benefits of dual core CPUs for example, and in general lowering this setting will reduce overall FPS the lower the setting. You can try raising it if you want to see if you can gain performance, however again you may experience mouse lag or input lag."
If the above is accurate and up to date, then it raises 2 questions:
1. Is there a considerable fps loss in most modern games by "tweaking" this?
2. Is there an actual benefit for today's gaming with high-DPI mice and fast-response monitors?
From my own experience, I have used a Razer, a Logitech, and the all-time favorite R.A.T. 9 mouse over the past 6 years on my 2 ms DVI-D monitor, all with high-DPI sensors, and I never experienced any noticeable or game-breaking lag with the default "flip queue size" setting.
If it actually helps in some cases, and I imagine that most GPUs nowadays have enough video RAM to handle frames rendered in advance in a host of games, then I am also wondering why this feature was dropped after ATI Tray Tools exposed it.
I personally do not remember CCC ever having it... but it has been a long time. I know RadeonPro still has it though.
I thought I remembered it being tucked in the 3D application settings several years ago, but I could be totally wrong (it wouldn't be the first time.)
I've been mostly using ATT and RadeonPro, so I probably just got mixed up. It would be nice if it were in the CCC, however. Nvidia has the option labeled as pre-rendered frames in their control panel, albeit since the 300-series drivers they have removed the ability to set it to 0; choosing 0 now just applies the driver or application default, which makes 0 a useless and asinine setting because it defaults right back to 3. At least you can still set it to 0 with an AMD card, but the option is so hidden it's not even in the control panel.
In regards to backFireX64, it's tough to answer those questions, because input lag is going to be engine- and system-specific, which includes things like your monitor, the aforementioned input peripherals, and more importantly your CPU. This setting is only relevant when using vertical sync, and for some, turning on vsync doesn't even add any additional perceptible input lag. I am also not entirely sure what the fps hit would be, because it's difficult to test something of that nature while using vsync.

If you can maintain your desired framerate, the lower you set this option, the less input lag you will have. In theory it should be three frames fewer at 0, so at 60 Hz that's about 50 ms less, which I think is a pretty big deal. Setting it to 1 is generally beneficial, and worth the trade in my opinion; I'll estimate a ~10% input lag decrease in exchange for a ~10% lower framerate, but that's a pretty vague estimate. However, by setting it to 0, you will take a significant fps hit compared to 2 or 1. I'm not even going to speculate on how this relates to the graphics pipeline, because I lack the technical skill.
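To put rough numbers on that estimate, here's a tiny back-of-the-envelope model (my own simplification, not anything pulled from the driver): with vsync holding you at the refresh rate, each queued frame can add up to about one frame time of latency.

```python
# Rough worst-case model of the extra input latency contributed by the
# flip queue (pre-rendered frames) when vsync caps output at the refresh
# rate. Real latency also depends on the engine, CPU/GPU timing, and the
# display itself, so take these as ballpark figures only.

def queue_latency_ms(queue_depth: int, refresh_hz: float = 60.0) -> float:
    """Extra latency from queued frames: depth * one frame time, in ms."""
    frame_time_ms = 1000.0 / refresh_hz
    return queue_depth * frame_time_ms

for depth in (0, 1, 2, 3):
    print(f"flip queue {depth}: ~{queue_latency_ms(depth):.1f} ms extra at 60 Hz")
```

At the default of 3 that works out to ~50 ms at 60 Hz, which lines up with the three-frame figure above; at 120 Hz the same queue depth only costs ~25 ms, which may be why high-refresh users notice it less.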
As a caveat to everything aforementioned, engines can specify a flip queue value, so a game could already be using a flip queue of 1 without you ever changing the driver default.
At some point (Vista?), Microsoft's driver guidelines changed, so that the device driver (to be certified) was not allowed to set the queue size beyond 3. Applications can still set it higher.
This was done because nVidia was setting the value to absurdly high values like 10, to facilitate apparent SLI scaling. But with so many frames in the queue, display lag was the inevitable result.
So since the driver isn't allowed to go higher than 3, the option is pretty much pointless. That's one of the main reasons why setups with more than three GPUs see such poor additional scaling. Ironically, the driver could set a larger queue in XP, but GPU counts higher than two aren't supported in that OS.