Just when GPU rasterization performance hit a historic high, with high-frame-rate gaming now possible at ultra settings at 1440p or even 4K, someone crashed the party (Nvidia).
No longer able to hide behind DX11 performance gimps (tessellation abuse, draw call abuse, driver overhead), Nvidia needed a new gimmick to differentiate itself from a more formidable than ever AMD... thus RTX/DXR was created.
Aimed at artificially manufacturing a way to make graphics chips seem inadequate again, DXR reinstates the illusion of a disappointing price/performance pyramid with Nvidia at the top and AMD lured into playing catch-up. Raytracing is a fundamentally inefficient algorithm: the computational cost of realtime implementations produces little to no visual advantage over countless far more efficient techniques, including SSR, light mapping, SDFGI, etc.
RDNA2, on the other hand, is an outstanding architecture, and the 128MB Infinity Cache bandwidth amplification is easily the engineering marvel of this generation, offering game developers a much higher potential performance advantage than Nvidia's expensive and inefficient scale-out architecture (Ampere).
RDNA2 is not bad at raytracing; the relatively poor raytracing performance seen in games like Cyberpunk and Control is due to poorly written shader code with little to no regard for architecture or performance. In fact, with properly written code, RDNA2 is architecturally superior to Ampere at raytracing.
To explain: the most important hardware feature for DXR performance, outside of having ray intersection accelerators, is shader core performance. Rays need intersection calculations, but they also need to be shaded via radiance calculations such as BRDF, Phong, Lambert, etc. Ray intersection is comparatively fast because it uses hardware acceleration; ray shading is the slowest stage and carries the biggest performance hit because it runs on the shader cores. Ampere has nearly double the single-precision FP32 shader performance of RDNA2, which more than explains the performance difference in DXR games. But that advantage only exists because game developers use single-precision FP32 calculations for ray shading. RDNA2 actually holds the bigger shading advantage: at half-precision FP16 it has more than double the shader throughput of Ampere. So if game developers simply switched the ray shading in their HLSL shaders from FP32 to FP16, the performance of the RX 6000 series GPUs would increase sharply, by over 100% in extreme cases.
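To make the "ray shading" part concrete, here is a minimal sketch of the kind of radiance math in question: a Lambert diffuse plus Phong specular term, evaluated at both precisions. This is my own illustrative construction (NumPy stands in for HLSL; the `shade` function and the test vectors are hypothetical, not from any shipping engine), showing that the same formula can be run in FP16 or FP32 just by changing the dtype:

```python
import numpy as np

def shade(n, l, v, albedo, spec_power, dtype):
    """Lambert diffuse + Phong specular radiance for one sample.
    n, l, v: unit surface normal, light and view vectors."""
    n, l, v = (np.asarray(x, dtype=dtype) for x in (n, l, v))
    albedo = dtype(albedo)
    ndotl = np.maximum(n @ l, dtype(0))     # clamped Lambert term
    r = dtype(2) * ndotl * n - l            # light reflected about the normal
    spec = np.maximum(r @ v, dtype(0)) ** spec_power
    return albedo * ndotl + spec

# Hypothetical sample: upward-facing surface, light from (1,1,1).
n = [0.0, 1.0, 0.0]
l = [0.577, 0.577, 0.577]
v = [0.0, 0.707, 0.707]

full = shade(n, l, v, 0.8, 16, np.float32)  # FP32 ray shading
half = shade(n, l, v, 0.8, 16, np.float16)  # FP16 ray shading
print(full, half, abs(float(full) - float(half)))
```

The point of the sketch is that nothing in the radiance formula itself requires 32-bit floats; the precision is a choice made in the shader source.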
Also, it's totally safe to do ray shading at FP16 half precision. It doesn't cause artifacts and looks no different from FP32. I am a graphics developer, and I use FP16 in all my bounced-lighting calculations, including BRDFs, Phong shading, diffuse shading, transparency with refraction, etc.
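To put a rough number on the "no visible difference" claim, here is a hedged sanity check (again my own construction, not from any shipping engine): evaluate a clamped Lambert diffuse term over many random samples in both precisions and compare the worst-case divergence against one 8-bit display step (1/255, about 0.0039). FP16 rounding error on a single diffuse evaluation stays on the order of a thousandth, i.e. around or below what an 8-bit framebuffer can even represent:

```python
import numpy as np

rng = np.random.default_rng(42)

def lambert(n, l, albedo, dtype):
    """Clamped Lambert diffuse, with every operation done in `dtype`."""
    n = n.astype(dtype)
    l = l.astype(dtype)
    albedo = albedo.astype(dtype)
    ndotl = np.clip((n * l).sum(axis=1), dtype(0), dtype(1))
    return albedo * ndotl

def unit(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# 10,000 random normals, light directions, and albedos.
n = unit(rng.normal(size=(10000, 3)))
l = unit(rng.normal(size=(10000, 3)))
albedo = rng.uniform(0.0, 1.0, size=10000)

err = np.abs(lambert(n, l, albedo, np.float32).astype(np.float64)
             - lambert(n, l, albedo, np.float16).astype(np.float64))
print(f"max abs error: {err.max():.6f}  (one 8-bit step = {1 / 255:.6f})")
```

Caveat in the interest of honesty: this covers a single direct-lighting term; long chains of FP16 operations can accumulate more error, which is why engines typically pick precision per-calculation rather than globally.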
AMD raytracing performance on the RX 6000 series is where Nvidia was with its previous-generation Turing GPUs, launched over 2.5 years ago.
AMD still has no answer to DLSS on top of that.
FidelityFX Super Resolution is still not here.
Keep seeing articles about it somehow not needing to use AI.
Maybe it will use AI.
It is funny that the same pro-AMD GPU people on this forum were laughing at Nvidia RTX 2080/2080 Ti raytracing performance when those cards launched, yet RX6*00/XT raytracing performance is somehow acceptable today, nearly 3 years later, in those old games like BFV, Metro Exodus, and Shadow of the Tomb Raider.
AMD RX6*00/XT GPUs are unable to cope with raytracing in the latest AAA titles like Cyberpunk and Control.
Maybe RDNA3 will be better.
You sound a bit biased against Nvidia GPUs to me.