For the last few years, there’s been an ongoing debate about the benefits (or lack thereof) of DirectX 12. It hasn’t helped that the argument has been bitterly partisan, with Nvidia GPUs often showing minimal gains or even performance regressions, while AMD cards have frequently posted significant performance increases.
A few years ago, when low-overhead APIs like DirectX 12 and Vulkan hadn’t yet been released and even Mantle was in its infancy, there were a lot of overconfident predictions about how these upcoming APIs would fundamentally transform gaming and unleash the latent power in all of our computers. The truth, thus far, has been more prosaic. How much a game benefits from DirectX 12 depends on what kind of CPU you’re testing it on, how GPU-limited your quality settings are, how much experience the developer has with the API, and whether the title was built from the ground up to take advantage of DX12 or had support for the API patched in at a later date.
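The CPU dependence is easy to see with a bit of back-of-the-envelope math: a frame can’t finish any faster than the slower of its CPU-side submission work and its GPU-side rendering work, so cutting API overhead only pays off when the CPU is the bottleneck. The sketch below illustrates that reasoning with made-up, purely hypothetical frame-time numbers; it isn’t benchmark data from any particular title or GPU.

```cpp
// Illustrative only: why low-overhead APIs help most when you're CPU-bound.
// All millisecond values are hypothetical, not measured results.
#include <algorithm>
#include <cstdio>

int main() {
    double gpu_ms      = 12.0;  // assumed GPU render time at a given quality setting
    double cpu_dx11_ms = 16.0;  // assumed CPU submission/driver overhead under DX11
    double cpu_dx12_ms = 8.0;   // assumed lower submission overhead under DX12

    // Frame time is roughly bounded by whichever side (CPU or GPU) takes longer.
    double dx11_frame = std::max(cpu_dx11_ms, gpu_ms);
    double dx12_frame = std::max(cpu_dx12_ms, gpu_ms);

    std::printf("DX11: %.1f ms (~%.0f fps)\n", dx11_frame, 1000.0 / dx11_frame);
    std::printf("DX12: %.1f ms (~%.0f fps)\n", dx12_frame, 1000.0 / dx12_frame);

    // Raise the GPU cost (e.g. gpu_ms = 25.0 at higher settings) and both APIs
    // land at ~25 ms per frame, so the DX12 advantage largely disappears.
    return 0;
}
```

In other words, a fast CPU paired with modest settings is where the lower-overhead path shows its largest wins; push the GPU harder and the two APIs converge, which is the same pattern the CPU-scaling comparison below reflects.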
And the components you choose can have a significant impact on what kind of scaling you see. Consider the graph below, from TechSpot, which compares a variety of CPUs while using the Fury X.