For those who play Cyberpunk 2077 with an AMD RX 6000-series GPU, this covers the ray tracing feature: https://www.pcgamer.com/cyberpunk-2077-ray-tracing-amd-graphics-card-rdna-2/
There's little to gain and much to lose from Cyberpunk's demanding ray tracing settings.
With the arrival of a mammoth patch from CD Projekt Red for Cyberpunk 2077 it is now possible to enjoy the shiny floors and bright lights of Night City with an AMD graphics card. Specifically, the Radeon RX 6000-series cards, fitted with the latest RDNA 2 architecture, are now capable of a little real-time ray tracing in Cyberpunk 2077.
And I say 'a little' for good reason. The latest Cyberpunk patch is a more than modest 33GB install (actually a little more space will be required to download), and its significant size is largely explained by the 484 bug fixes it includes for the PC version. Bundled alongside that is the arrival of AMD-powered real-time ray tracing in-game, a feature notable by its omission from the launch day roster.
It's an interesting prospect for anyone rocking the latest AMD silicon, too, as Cyberpunk 2077 may pose the biggest challenge for RDNA 2 silicon since, well, ever. The game is tough going without even a drop of ray tracing to speak of. Furthermore, AMD is contending with more powerful ray tracing silicon in Nvidia's RT Cores, which have proven themselves more capable than AMD's Ray Accelerators.
With that being said, the truth lies in the numbers. So I've blown the dust off my Cyberpunk install and taken the Radeon RX 6900 XT and Radeon RX 6700 XT for a spin in-game to tell you whether it's all worth it.
First up, the Radeon RX 6900 XT. This is AMD's top card and an absolute stunner in rasterised performance. It's just a shame it's kind of pricey next to the RX 6800 XT and GeForce RTX 3080. Nevertheless, this GPU offers RDNA 2 at its absolute finest, and that means it's the flagship for red team ray tracing performance in Cyberpunk 2077.
My benchmark run is a quick lap around the Afterlife club, which has become something of a showcase spot for ray tracing in-game. As such, you may find a little more performance out in the streets, or a little less when you get into heavy combat.
| Setting (RX 6900 XT) | Avg fps | Min fps |
| --- | --- | --- |
| 4K - Ultra - RT off | 44 | 28 |
| 4K - RT Medium | 17 | 13 |
| 4K - RT Ultra | 11 | 9 |
| 1440p - Ultra - RT off | 85 | 36 |
| 1440p - RT Medium | 34 | 25 |
| 1440p - RT Ultra | 23 | 19 |
As you can see in the table above, you're looking at a pretty significant drop in performance from the Ultra preset to the Ray Tracing Medium preset in-game. There are no ray-traced reflections at this grade, only shadows and 'medium' lighting, but even so it's tough going for even the top RDNA 2 card.
Ray-traced reflections, possibly the most clearly noticeable ray tracing effect of the lot, are introduced with the Ray Tracing Ultra preset. This setting sees a dramatic reduction in performance that creeps down to single digits at times.
Needless to say, neither inspires much hope for smooth framerates, not even 30fps, at 4K.
Perhaps that's a given. Even without ray tracing, the Radeon RX 6900 XT struggles to manage a steady 60fps at Ultra in 4K. 1440p is a much more attainable goal in that regard. The RX 6900 XT hits a decent average of 85fps with no ray tracing enabled, and nearly maintains a steady 30fps or more with ray tracing enabled, but not cranked all the way up.
It's still far from what I would call smooth going, however. It's a mighty reduction in performance for a handful of visual effects.
| Setting (RX 6700 XT) | Avg fps | Min fps |
| --- | --- | --- |
| 1080p - High - RT off | 102 | 39 |
| 1080p - Ultra - RT off | 91 | 44 |
| 1080p - RT Medium | 34 | 25 |
| 1080p - RT Ultra | 24 | 19 |
CPU: AMD Ryzen 7 5800X
Motherboard: MSI GODLIKE X570
Memory: Corsair Vengeance 32GB @ 2,666MHz
Storage: 1TB WD Black SN750
CPU Cooler: G.Skill 360mm liquid cooler
PSU: EVGA 850W G2
On to the Radeon RX 6700 XT, and it's a similar story. I only ran this GPU at 1080p for the purposes of this testing, which is probably a small mercy on my part, because the performance cliff edge of ray tracing puts this $379 card in an already rather precarious position, even at the lower resolution.
There's one salve for ray tracing in Cyberpunk 2077, or at least a potential salve, in FidelityFX CAS, or Contrast Adaptive Sharpening. This is a sharpening and upscaling feature built into AMD's FidelityFX suite that adjusts sharpening within a scene based on the scene's details.
Why might CAS have an effect on performance? I hear you ask. Used in tandem with a dynamic render resolution—a resolution that is altered on the fly based on any given scene to maintain a steady fps—CAS can cut out some of the blurriness that would otherwise be introduced by rendering at a lower resolution.
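The dynamic-resolution half of that pairing is simple in principle: the engine watches frame times and nudges the render scale up or down to hold a target. A minimal sketch of such a feedback loop, assuming a hypothetical `update_render_scale` helper (this is an illustration of the idea, not CD Projekt Red's actual implementation):

```python
def update_render_scale(scale, frame_ms, target_ms=33.3,
                        min_scale=0.5, max_scale=1.0, step=0.05):
    """Nudge the render-resolution scale toward a frame-time budget.

    If the last frame ran over budget, render fewer pixels next frame;
    if there is comfortable headroom, claw back some sharpness.
    Real engines smooth this decision over many frames.
    """
    if frame_ms > target_ms * 1.05:       # over budget: drop resolution
        scale -= step
    elif frame_ms < target_ms * 0.85:     # headroom: raise resolution
        scale += step
    return max(min_scale, min(max_scale, scale))
```

At 33.3ms (30fps) as the target, a 45ms frame would push the scale down a notch, and a 20ms frame would let it climb back toward native.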
This differs from Nvidia's DLSS in that it doesn't introduce new visual information into a scene, only alters what's there for clarity. AMD's planning a new feature in FidelityFX Super Resolution for just that purpose, but it's unfortunately not out yet.
So instead it's on CAS that all our hopes rest, and unfortunately it's not an altogether pleasant experience in motion (it's a little tough to tell the difference between CAS on and off in screenshots, I'll admit). For one, and I assume this is a bug that will be fixed, CAS causes some textures to flicker during gameplay on our test system when ray tracing is enabled.
It comes and goes in any given scene, but is prevalent enough that you would most likely want to disable the feature altogether, or disable ray tracing (which also puts an end to the flicker), simply to be rid of it.
Perhaps more importantly, CAS can also introduce a significant level of blur. The effect can be adjusted to balance performance against clarity, so the level of blurriness really depends on what sort of performance you're after and how far you're willing to go to chase that fps dream. But it's again a little less clarity in exchange for improved visuals elsewhere; it's always a trade-off, and not necessarily a worthy one.
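The "contrast adaptive" idea itself is straightforward: add back high-frequency detail, but scale the amount down where local contrast is already high so strong edges are not pushed into halos. A toy sketch of that principle (a deliberate simplification, not AMD's actual FidelityFX CAS shader):

```python
import numpy as np

def adaptive_sharpen(img, strength=0.5):
    """Toy contrast-adaptive sharpening of a 2D greyscale array in [0, 1].

    High-frequency detail is estimated with a cheap 4-tap blur and added
    back, weighted by (1 - local contrast) so flat regions get the full
    sharpening and hard edges get less. Illustration only, not AMD's CAS.
    """
    p = np.pad(img, 1, mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    blur = (up + down + left + right) / 4.0      # cheap 4-tap box blur
    detail = img - blur                          # high-frequency component
    local_max = np.maximum.reduce([up, down, left, right, img])
    local_min = np.minimum.reduce([up, down, left, right, img])
    weight = strength * (1.0 - (local_max - local_min))
    return np.clip(img + weight * detail, 0.0, 1.0)
```

On a perfectly flat region the detail term is zero, so the image passes through unchanged; near a hard edge the contrast term suppresses the boost.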
In my testing, it all felt like too much blur to justify the extravagant, and perhaps frivolous, ray tracing implementation in-game. Cyberpunk 2077 is a visually impressive game that's only marginally improved by extensive ray-traced lighting and reflections, and I found that during my time with it, even with ray tracing available on AMD's cards, higher framerates remained my primary focus.
It's great to see AMD back on even footing with Nvidia in Cyberpunk 2077 in terms of feature support, after what was a surprising launch day omission. Yet even post-ray tracing support, my recommendation is to keep this feature disabled, even if you spent a grand on the flagship RDNA 2 card.
So basically you cannot run the game with ray tracing on any AMD GPU and get a usable framerate.
CAS is a ReShade Filter you can run on Nvidia GPUs.
This article might be interesting:
AMD need a DLSS equivalent.
There are some Nvidia ray tracing performance numbers in there as well.
I thought it was well known even at launch that nVidia's RTX 3000 series is far superior in ray-traced scenarios, while the RX 6000 series fairly matches it in traditional (raster) rendering. When I bought my 6900XT I knew and expected RT performance at, or indistinguishably close to, 2080Ti/3070 levels. That is well compensated by the card's non-RT performance. When you hear things like "6900XT may be on par or extremely close to 3090", it is assumed you have the basic informational background to infer that it is followed by "in traditional raster rendering at native resolution (no cheats like DLSS allowed)".
Some benchmarks here:
Not sure I would call DLSS a cheat.
It's lower resolution at its core. The fact that it uses fancy tensor/AI algorithms to scale near-losslessly instead of classic bilinear (what every GPU ever does for up/down scaling traditionally) or more fancy, but still non-intelligent lanczos doesn't change that fact.
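The "classic bilinear" scaling mentioned above can be written out in a few lines: each output pixel is mapped back into the source image and blended from its four nearest neighbours. A plain textbook sketch (not any vendor's actual hardware path):

```python
import numpy as np

def bilinear_upscale(img, out_h, out_w):
    """Classic bilinear upscaling of a 2D array, the 'non-intelligent'
    interpolation every GPU has done in hardware for decades."""
    in_h, in_w = img.shape
    # map each output sample position back into input coordinates
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)   # clamp at the border
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]             # vertical blend weights
    wx = (xs - x0)[None, :]             # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Nothing here invents detail: every output value is a weighted average of existing pixels, which is exactly why naive upscaling looks soft next to a learned reconstruction like DLSS.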
His FPS is way off for a 6900 XT at 4K Ultra in Cyberpunk 2077. I get 78-90 FPS depending on the situation at 4K Ultra with no ray tracing.
So I'm wondering why his FPS is so low.
Why is his FPS so low? I get 78-90 FPS in Cyberpunk 2077 at 4K Ultra with HDR on. I have a 6900 XT, 5950X, and 32GB of DDR4-4000 RAM, on an NVMe drive.
Big question. If it were ever so slightly lower, like within a 10FPS margin, I'd say the 2,666MHz RAM is to blame, but it's WAY lower than what you say you get. No ordinary bottleneck can slice a framerate in two.
So despite FPS issues with the 21.3.1 and 21.3.2 drivers, I re-installed them just to try out the ray tracing performance.
With everything at Ultra, including all ray tracing settings on, I was getting around 45-50 FPS at 4K. If I turn ray-traced lighting down to medium, with everything else the same, it goes up to 50-60 FPS. If I turn ray-traced lighting off it goes back up to around 70-80.
Test system used is:
"To ensure our results are comparable, the Core i9 10900K is locked to a 5GHz all-core frequency and cooled by a 240mm Alphacool Eisbaer Aurora AiO which keeps the overclocked system at around 75C under full load. The 10900K is backed by an Asus Maximus 12 Extreme Z490 motherboard and two 8GB sticks of G.Skill Trident Z Royal 3600MHz CL16. Our games are run from a capacious 2TB Samsung 970 Evo Plus NVMe drive provided by Box. The whole rig is powered by an 850W gold-rated Gamer Storm power supply."
Just when GPU performance hit a historic high, where high framerate gaming is now possible at ultra settings at 1440p or even 4K, someone crashed the party. (Nvidia)
No longer able to hide behind DX11 performance gimps, Nvidia needed a new gimmick to differentiate itself from a more formidable than ever AMD... thus RTX/DXR was created.
Aimed at artificially manufacturing a way to make hardware seem inadequate again, DXR reinstates the illusion of a disappointing performance pyramid with Nvidia at the top and AMD lured into playing catch-up. Ray tracing is a fundamentally inefficient algorithm whose computational cost produces little to no visual advantage over countless far more efficient techniques, including SSR, light mapping, SDFGI, etc.
RDNA2, on the other hand, is an outstanding architecture, and Infinity Cache is easily the engineering marvel of this generation, offering game developers much higher potential performance than Nvidia's expensive and inefficient scale-out architecture (Ampere).
RDNA2 is not bad at ray tracing; Cyberpunk and games like it are just very poorly written, with little to no regard for performance. In fact, with properly written code, RDNA2 is architecturally superior to Ampere at ray tracing.
To explain: the most important hardware feature for DXR, outside of having ray intersection accelerators, is shader core performance. Rays need intersection calculations, but they also need to be shaded via BRDF, Phong, Lambert or other calculations. Ray intersection is relatively fast because it uses dedicated hardware acceleration; ray shading is the slowest part and takes the biggest performance hit because it runs on the shader cores. Ampere has single-precision FP32 shader performance that is nearly double that of RDNA2, which more than explains the performance differences in DXR games. That advantage only exists because game developers use single-precision FP32 calculations for ray shading. RDNA2 actually has a much bigger edge in shading throughput, in that RDNA2 has more than double the half-precision FP16 performance of Ampere. So if game developers would just switch from FP32 to FP16 in their HLSL shaders for ray shading, the performance of RX 6000-series GPUs would sharply increase, by over 100% in extreme cases.
Also, it's totally safe to do ray shading at FP16 half precision. It doesn't cause artifacts and looks no different from FP32. I am a graphics developer and I use FP16 in all my bounced lighting calculations, including BRDFs, Phong shading, diffuse shading, transparency with refraction, etc.
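The "looks no different" claim is easy to sanity-check numerically outside a shader. This sketch (an illustration in NumPy, not HLSL) evaluates a Lambert diffuse term for random surface normals at both precisions and measures the worst-case difference against the 1/255 quantisation step of an 8-bit display channel:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 random unit surface normals and one unit light direction
n = rng.normal(size=(10000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
light = np.array([0.3, 0.5, 0.8])
light /= np.linalg.norm(light)

def lambert(normals, light_dir, dtype):
    """Lambert diffuse term max(N.L, 0), with every op at the given precision."""
    nd = normals.astype(dtype)
    ld = light_dir.astype(dtype)
    dot = nd[:, 0] * ld[0] + nd[:, 1] * ld[1] + nd[:, 2] * ld[2]
    return np.maximum(dot, dtype(0)).astype(np.float32)

# worst-case FP32-vs-FP16 discrepancy across all samples
err = float(np.abs(lambert(n, light, np.float32)
                   - lambert(n, light, np.float16)).max())
# for this simple term the error sits around the 1/255 step of an
# 8-bit channel, i.e. at or below what the display can even resolve
```

This only probes a single diffuse term; longer shading chains accumulate more FP16 rounding, which is why shader authors still keep some accumulators at higher precision.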
That is all very interesting.
It does not change the fact that many people are really interested in ray tracing and find ray tracing + DLSS at "4K" to deliver acceptable visual quality and frame rates.
Versus the top-end AMD RX 6800/6900 XT GPUs (which are rarer than dragon's teeth), which cannot deliver acceptable frame rates with ray tracing on at 4K in the latest AAA titles.
I own a couple of Palit RTX 2080 GamingPro OC GPUs, and they seem to have roughly similar ray tracing performance to the RX 6000-series GPUs, without using DLSS at all.
RTX2080 was launched ~ 2.5 years ago.
I got both of those GPUs for less than the cost of one RX 6800 XT today.
The "marvellous scandal" is that AMD did not do better and has no answer to DLSS for the RX 6000-series GPUs, yet wants to charge just as much.
The even bigger "marvellous scandal" is the terrible state of the AMD "GPU Compute Environment": none on Windows, and only ROCm on Ubuntu Linux. If you are into GPU compute, Nvidia CUDA or OpenCL is currently the only sensible option for a small developer/research team. You can actually spend your time programming rather than dealing with installation issues and new/recent GPUs that are unsupported in the AMD OpenCL compute environment.
Blender GPU performance is another area where AMD is weak versus Nvidia and again, latest AMD GPUs have problems.
When I say latest I mean the RX 5700 XT and RX 590... Goodness knows what the reality is with RDNA2.
If you do not care about RayTracing and if you do not care about a supported Compute Environment and you do not care about Blender, then fine. Go AMD for your GPU.
Those AMD RX 6800/6900 XT GPUs designed for 1440p/4K gaming do post slightly higher, already insanely high, FPS at 1080p versus those Nvidia GPUs at that resolution.
A number of GPU Reviewers have stated similar things to me about the AMD RDNA2 GPUs.
Nvidia are still on a slower, lower-performing Samsung process.
It is probably best to wait and see how Ampere performs on a better Samsung or TSMC process, or wait for the next Nvidia generation, rather than go near those Nvidia RTX 3000-series GPUs either.
The GPU market is crazy at the moment and both AMD and Nvidia latest high end GPUs cost far too much.
Right now I am staying with the RTX2080 GPUs and RX5700XT cards I own.
I refuse to get taken in by Marketing hype and insane pricing and false promises from both AMD and Nvidia.
Sneak preview of next Nvidia RTX4000 series GPU here:
It seems Nvidia have moved from Samsung 8nm to a new Intel process.