For the last few years, there’s been an ongoing debate about the benefits (or lack thereof) of DirectX 12. It hasn’t helped that the argument has been bitterly partisan, with Nvidia GPUs often showing minimal gains or even performance regressions, while AMD cards have often shown significant performance increases.
[H]ardOCP recently compared AMD and Nvidia performance in Ashes of the Singularity, Battlefield 1, Deus Ex: Mankind Divided, Hitman, Rise of the Tomb Raider, Sniper Elite 4, and Tom Clancy’s The Division. Bear in mind that this was specifically designed as a high-end test, pitting the two APIs against each other in GPU-limited scenarios at high resolutions and detail levels, with a Core i7-6700K clocked at 4.7GHz powering the testbed. The GTX 1080 Ti was tested at 4K, while the less-powerful GTX 1080 and RX 480 were tested at 1440p. Before you squawk about comparing the GTX 1080 and the RX 480, keep in mind that each GPU was only compared against itself in DX11 versus DX12.
Why DirectX 12 hasn’t transformed gaming
A few years ago, when low-overhead APIs like DirectX 12 and Vulkan hadn’t been released and even Mantle was in its infancy, there were a lot of overconfident predictions about how these upcoming APIs would unleash the latent power in all of our computers and fundamentally transform the gaming industry. The truth, thus far, has been more prosaic. How much a game benefits from DirectX 12 depends on what kind of CPU you’re testing it on, how GPU-limited your quality settings are, how much experience the developer has with the API to start with, and whether the title was developed from the ground up to take advantage of DX12 or had support for the API patched in at a later date.
And the components you choose can have a significant impact on what kind of scaling you see. Consider the graph below, from TechSpot, which compares a variety of CPUs, each paired with a Fury X.