We’ve been covering AMD’s Ryzen and Navi announcements at E3 throughout the week, with one more aspect of the situation left to discuss. While we’ve discussed Navi and its RDNA architecture, we haven’t talked about any of the software improvements AMD intends to offer with its next GPUs. Some of these gains will be available to GCN cards as well.
Let’s talk about some features and improvements.
First up, there are the general quality-of-life gains baked into AMD’s Radeon Software. With Navi, the system will automatically drop your TV into its low-latency game mode if the display supports one. You’ll be able to save your settings to a separate file and re-import them after a clean driver install or a full OS reinstall. There are also some improvements to how WattMan reports its results.
AMD’s Link streaming application now supports streaming to TV boxes, including Apple TV and Android TV. Wireless VR streaming is now supported as well. These improvements are not gated to any specific GPU.
Radeon Chill is AMD’s technology for reducing GPU power consumption while gaming. The software can now cap the frame rate on 60Hz displays to reduce the number of frames rendered while you’re AFK and not actively controlling your character.
AMD’s footnote on Radeon Chill is worth reading. Under the right circumstances, it can significantly reduce GPU power consumption, though this does impact frame rate, and the total size of the gain varies from title to title. Any GPU that previously used Radeon Chill can take advantage of these improvements.
Next up, Radeon Anti-Lag. AMD says it has developed a method of reducing the time between when you press a button in a game and when you see the result on screen. This is accomplished by delaying some CPU work so that it occurs in parallel with the GPU’s rendering rather than being completed in advance.
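To illustrate the idea, here’s a toy steady-state latency model (my own simplification, not AMD’s implementation) of a GPU-bound game with a one-frame render queue. Without any pacing, the CPU samples input a full GPU frame early and the finished frame sits in the queue; delaying the CPU start moves the input sample later without changing the frame rate. All numbers and the function itself are illustrative.

```python
def input_latency_ms(cpu_ms: float, gpu_ms: float, anti_lag: bool) -> float:
    """Toy model: button-press-to-display latency in a GPU-bound game.

    Assumes a one-frame queue: the CPU prepares frame N while the GPU
    is still rendering frame N-1, and gpu_ms > cpu_ms (GPU-bound).
    """
    if not anti_lag:
        # CPU samples input as early as possible, a full GPU frame ahead;
        # the prepared frame then waits in the queue for the GPU.
        return 2 * gpu_ms
    # Anti-lag-style pacing: delay the CPU so its work finishes just as
    # the GPU frees up, sampling input (gpu_ms - cpu_ms) later.
    delay = max(gpu_ms - cpu_ms, 0.0)
    return (gpu_ms - delay) + gpu_ms
```

With a 4 ms CPU frame and a 16 ms GPU frame (about 62 fps), the model predicts 32 ms of input latency without pacing and 20 ms with it — the frame rate is identical either way, which matches how the feature is pitched.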
I cannot honestly say that I observed a difference between having Radeon Anti-Lag enabled versus disabled. AMD demonstrated that the effect was working using custom-built latency monitors clipped to displays, and I believe the company’s claim that the monitor I tested had slightly better latency. I’m at an age where motor reflexes have already begun to decline, and if I’m being honest, I was never a particularly good twitch gamer to start with.
Best-case, this feature shaves a few milliseconds off your total latency. If you’re a good enough gamer to compete in these spaces to begin with, that might genuinely be worth something. It’s not something I feel capable of commenting on.
Anti-Lag is supported in DX11 on all AMD GPUs. Support for DX9 games is Navi-only. DX12 games are not currently supported due to dramatically different implementation requirements in that API.
Radeon Image Sharpening (RIS) is a feature that pairs contrast-adaptive sharpening with GPU upscaling to improve overall image quality above baseline without paying the penalty of native 4K rendering. The following slides compare RIS on versus RIS off.
RIS is off in the slide above.
RIS is on in this slide. The effect is very subtle. You may want to open both of the images above in separate tabs, zoom in carefully, and then compare the final product. While there’s a definite IQ improvement in the “ON” image, it’s a small one.
Still, small improvements to IQ are generally welcome. RIS was designed by Timothy Lottes, who created FXAA at Nvidia. The performance impact of the feature is expected to be negligible (an estimated hit of 1 percent or less). RIS is a Navi-only feature and is only supported in DX12 and DX9.
Finally, there’s FidelityFX.
FidelityFX is AMD’s new addition to GPUOpen, released to any developer who wants to take advantage of it. Its Contrast Adaptive Sharpening tool can be used on any GPU if developers choose to support it.
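To give a flavor of what contrast-adaptive sharpening does, here’s a minimal grayscale sketch in Python/NumPy. It follows the general idea — sharpen less where local contrast is already high, so edges don’t ring — but the weights and the exact formula are my own simplification, not AMD’s actual CAS shader.

```python
import numpy as np

def cas_sharpen(img: np.ndarray, sharpness: float = 0.5) -> np.ndarray:
    """Simplified contrast-adaptive sharpening for a [0, 1] grayscale image."""
    p = np.pad(img, 1, mode="edge")
    c = p[1:-1, 1:-1]                       # center tap
    n, s = p[:-2, 1:-1], p[2:, 1:-1]        # north / south neighbors
    w, e = p[1:-1, :-2], p[1:-1, 2:]        # west / east neighbors
    lo = np.minimum.reduce([c, n, s, w, e])
    hi = np.maximum.reduce([c, n, s, w, e])
    # Headroom-based sharpening amount: near-flat regions get the full
    # effect, high-contrast edges (hi - lo large) get much less.
    amp = np.sqrt(np.clip(np.minimum(lo, 1.0 - hi) / np.maximum(hi, 1e-6),
                          0.0, 1.0))
    wgt = -amp * (0.0625 + 0.125 * sharpness)   # negative lobe per cross tap
    # Normalized 5-tap kernel: center plus four negative-weighted neighbors.
    out = (c + wgt * (n + s + w + e)) / (1.0 + 4.0 * wgt)
    return np.clip(out, 0.0, 1.0)
```

Running this on a soft vertical edge pushes the bright side brighter and the dark side darker, while a flat region passes through unchanged — which is the visual effect you can hunt for in the RIS on/off slides above.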
A few more hardware details on Navi that didn’t make it into earlier stories but probably should’ve (blame a frenetic briefing schedule and some jumbled note-taking):
AMD plans to keep GCN GPUs in-market to handle HPC workloads. The AMD engineer we spoke with compared GCN to a broadsword: enormously effective if swung properly, but relatively cumbersome to wield. RDNA, by contrast, is more of a lightsaber, focused on elegance and economy of motion. GPUs like the MI50 and MI60 also offer far more memory bandwidth and larger memory pools than any of the Navi cards coming to market in the near future.
RDNA is expected to eventually replace GCN in this space and has some fixes for slow-path quirks that GCN suffered from. Irregular performance with certain texture formats has been fixed, for example, and RDNA has larger caches to prevent pipeline bubbles. Overall performance should be more predictable with RDNA-derived GPUs than it was with GCN.
And gimmicks. Still ticks me off that Navi XT is going to be more expensive than the RTX 2070, draw more power, no doubt be noisier, and not effectively be any faster...
Isn't this where you say AMD's next line of GPUs after Navi is going to be the best and destroy the competition? That's why I always tell people to buy a card now if they need one and not wait for a future AMD release. It's turning into wash, rinse and repeat with them.
No, this is where I say "Navi, as it is being presented, has the potential to be a great card, now and in the future, but Lisa Su's swelled head is preventing it from being one."
According to AMD's presentation slides, Navi XT and the RTX 2070 will be effectively equal in power on average.
So when Navi XT custom boards come out, they're going to be priced about the same as the RTX 2070, maybe a few dollars cheaper, assuming nVidia doesn't cut prices, which they likely won't but could easily do.
Now if Navi XT were to be priced at $349 and custom Navi XT boards $399 then yea, AMD would have a good chance, but as it stands they're going to be competing against nVidia with a board which produces more heat, consumes more power, and lacks ray tracing (yes I know, but it's the current buzz word, and no doubt will be mentioned by every single reputable review site as a con and a reduction in rating), in a marketplace utterly dominated by nVidia.
So the question is, what is AMD bringing to the table to sway people away from nVidia?
Don't get me wrong, Lisa Su has turned AMD around from bankruptcy to very profitable, but she's done it on the strength of a processor which, even though not as fast as Intel's, is priced much better. She doesn't see the graphics market that way, though; she sees semi-custom and professional-level graphics as what's important. That made sense when AMD was cash-starved and a massive amount of RTG resources were poured into the semi-custom and professional market, but this is 2019. AMD's in profit, and it's time for AMD to get back into the game before Intel makes it that much harder.
I am a bit jaded at this point. I view Chill as an epic failure to properly design a product: an admission that the hardware is cranked way too high to appear more competitive in benchmarking/reviews. Meanwhile, quite a few consumers know to undervolt (aka run at a normal level).
Wonder why RIS is NAVI only? Is there a hardware component to it, or is it market driven enticement?
I am puzzled at why Anti-Lag supports all GCN cards in DX11, but DX9 support is Navi-only.
All these new gimmicks add up to a massive fail in my opinion. Before Crimson, you never read complaints like "I can't control my fans, I can't get my fans to stop spinning at the destructive 20% minimum," or "Chill doesn't work (except for listed/supported games), boo hoo." Remember 'Surface Format Optimization'? That was where AMD got caught fudging comparative fps tests/results against Nvidia. Great-sounding name that people still enable today, not having a clue as to what it does (lowering other graphics settings). I wish AMD would stick to developing graphics cards and driver releases that work. Forget the fluff.
They're all over the place. Just noticed this after reading your reply: "RIS is a Navi-only feature and is only supported in DX12 and DX9." So no DX11? Does that mean DX11 already has something? If not, it seems like a strange omission to skip the most heavily used DX API.
I like the performance monitor that was released with the adrenaline drivers. Awesome way to monitor fps, clocks etc. during gameplay in an easy to chart format. It pretty much turns any game into a benchmark. So it isn't ALL fluff.