General Discussions

rainingtacco
Challenger

AMD when will you fix your terrible driver overhead for DX11?

You really want us to play the newest games on DX12, don't you?

52 Replies
leyvin
Miniboss

What terrible Driver Overhead in DirectX 11?

Are you just assuming that the poorer performance in DirectX 11 is from Driver Overhead?

While the Dx11 Driver Overhead is typically greater than on competing NVIDIA Cards, it's perhaps important to keep in mind that GCN was designed with "Next Gen" (at the time) Graphics APIs in mind...

This results in some necessary Driver Wizardry to get it to execute in a way the Architecture likes, and you can never really hit the same level of "Optimisation" to fully utilise the Hardware.

NVIDIA on the other hand... their Architecture was always designed for said older Graphics APIs, and it realistically still is. This is why even the "Latest and Greatest" NVIDIA Hardware doesn't play nice with what have become the current Graphics APIs.

So there has been a bit of a role reversal.

AMD simply bet on DirectX 12 / Vulkan being adopted much quicker (2015/2016 when they were first released) rather than 4-5 years down the line.

It's always going to be a trade-off; if you don't like it... just get an NVIDIA Card for older games, and use your Radeon for newer games.


It's been over 10 years since DX11 released, new games are still being released on DX11, and AMD doesn't want to fix their drivers. AMD doesn't have a better DX12 driver, despite pouring resources into it. It just doesn't suck like their DX11 driver, so that's why it looks good in comparison.

You know what's actually funny? The worst combination of CPU and GPU is actually an AMD CPU with an AMD GPU. Slower draw calls on AMD CPUs, paired with driver overhead that puts more strain on the CPU [especially a single thread], and you get much worse performance in DX11 games. I really laugh at AMD fanboys defending this. I too was duped by their weasel words and bought an AMD CPU AND GPU. Never again; next time I'm doing my research better before I buy a GPU and CPU, and not just looking at meaningless performance graphs...


It's been over 10 years since DX11 released, new games are still being released on DX11, and AMD doesn't want to fix their drivers.

DirectX 11 should've been retired in 2015. 

Developers are technically still using DirectX 11; however, it is important to keep in mind that they're not all using the SAME DirectX 11.

Now this can get a little confusing and complicated, but right now both DirectX 11.2 and 11.3 are in use.

In order to explain just how different... I will have to summarise how the Graphics APIs and Drivers Interact.

So there are two types of Graphics API.

Hardware Abstraction Layer (HAL) and Low-Level Virtual Machine (LLVM)

In essence, HAL APIs are Implicit by Nature... you're more making suggestions of what you want the end result to be, then the API and Drivers "Fill in all the Steps" before telling the Hardware.

LLVM APIs on the other hand are Explicit by Nature... all they're doing is more-or-less directly translating what you're telling the Hardware into its Native Language.

Now if we take something, like Multi-Threading as an example here... this actually tends to work very poorly in an Implicit API. Why? Well, consider what Multi-Threading actually is.

You're creating a very large (albeit repeatable) workload, and what you want is the Hardware to break down said workload into smaller tasks and delegate it to all of the available Workers (Cores).

This is fine if, for example, the Hardware actually has a Manager for such (i.e. GigaThreading or Hyper-Threading), but when the Hardware doesn't... well, then it falls on the Driver to translate said Implicit Threading (i.e. "I want you to delegate this task to all available Threads") into Explicit Threading.

Now DirectX 11.2 does add support for such, but the thing is that it's added as an Extension... so the API itself was never designed to be Multi-Threaded, thus it can only really Thread "Big" Jobs that make more sense to Thread across Multiple Graphics Processors, as opposed to Threading within a single Processor.
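To make that "bolted-on" threading concrete, here is a minimal C++ sketch of the deferred-context / command-list mechanism Direct3D 11 exposes for threaded command recording. It assumes an already-created ID3D11Device and immediate context (so it is not a complete program), and the draw calls are placeholders:

```cpp
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Worker thread: record work into a deferred context, then bake it into a
// command list that the main thread can submit later.
ComPtr<ID3D11CommandList> RecordOnWorkerThread(ID3D11Device* device)
{
    ComPtr<ID3D11DeviceContext> deferredCtx;
    device->CreateDeferredContext(0, &deferredCtx);

    // ... set state and issue draws exactly as on the immediate context ...
    // deferredCtx->IASetVertexBuffers(...);
    // deferredCtx->DrawIndexed(...);

    ComPtr<ID3D11CommandList> cmdList;
    deferredCtx->FinishCommandList(FALSE, &cmdList);  // bake the recorded work
    return cmdList;
}

// Main thread: playback still funnels through the single immediate context.
// If the driver does not support command lists natively, the D3D11 runtime
// emulates this step in software on that one thread.
void SubmitOnMainThread(ID3D11DeviceContext* immediateCtx,
                        ID3D11CommandList* cmdList)
{
    immediateCtx->ExecuteCommandList(cmdList, FALSE);
}
```

Note that recording can be spread across threads, but submission and playback remain serialised on the immediate context, which is the limitation being described.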

If we take GCN for example... well, its Compute Units are in clusters of 4 to create a "Core"... but it lacks a Thread Manager, so this is the smallest / largest workgroup that can be assigned.

Whereas if we look at any Unified Shader Architecture (Maxwell to Ampere), these have a Thread Manager, so they can break down the assigned work to the individual Cores more efficiently.

As a result, while both Architectures do benefit from Multi-Threading in DirectX 11.2... you're simply going to be under-utilising any GCN Processor, comparatively speaking.

DirectX 11.3 changes this, as it was re-written to be Multi-Threaded; this means it dispatches smaller workloads that GCN can now better utilise, while the benefits for USA (Unified Shader Architectures) are minimal.

DirectX 12 takes this a step further, because it's Explicit... you as the Developer can keep tabs on every Graphics Core "In-Flight" and Dispatch whatever Workloads you want, when possible, to said Hardware.

This means the Hardware can potentially be used very close to 100%, provided the Programming Team / Engine is making the most of this approach.
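For contrast, a hedged sketch of the explicit DirectX 12 path being described: each worker records its own command list against its own allocator, and the application decides when the whole frame is handed to the queue. The device, queue and allocators are assumed to already exist, and recording details are elided:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Each worker owns its own allocator + command list; nothing is hidden in the
// driver, and the application controls what is in flight at all times.
ComPtr<ID3D12GraphicsCommandList> RecordChunk(ID3D12Device* device,
                                              ID3D12CommandAllocator* allocator)
{
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, nullptr, IID_PPV_ARGS(&list));
    // ... record resource barriers and draws for this worker's slice ...
    list->Close();
    return list;
}

// One explicit submission for the whole frame's worth of recorded work.
void SubmitFrame(ID3D12CommandQueue* queue,
                 const std::vector<ID3D12CommandList*>& lists)
{
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
}
```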

The same is true regarding other aspects; because our commands are now being Explicitly translated to the Hardware, as opposed to Phrases being translated (which on GCN would often then have to be further translated by the Drivers in order to make sense to the Hardware)... well, this as you can imagine reduces overhead and gives us much more control over what the GPU is doing at all times.


A key point to note, however, is that DirectX 11.2 and 11.3 are DIFFERENT APIs, even though they will always be listed as the same API.

It's why, for example, Resident Evil VII / II Remake / III Remake have such impressive DirectX 11 performance; in fact when they first launched it was better than their DirectX 12 performance... it's because they're using DirectX 11.3, NOT DirectX 11.2.

This has the benefit of still being Abstracted (Implicit) by nature, making it easier to program, as you don't necessarily need to be aware of what the Hardware is doing; while many of the performance improvements added to DirectX 12 are still present (in a limited fashion).

You can of course still squeeze better performance via optimisation and hardware utilisation from DirectX 12, and this is especially true in regards to the 1% Low Frame Times. There are also various things you can do that are simply not possible via an Implicit API, which DX11.3 is still using; but generally speaking you can get very similar performance "Out-of-the-Box" on any Hardware with it.

Now why doesn't AMD fix this? Simple, they can't. 

GCN was developed as a DirectX 11.3 (DirectX 12) Architecture... it works in a different way and lacks some key hardware elements, so it relies on more Explicit Control to make the most of it.

While AMD could of course add to the Drivers to improve how Command Buffers are translated to the Hardware to better utilise it... this would come at the cost of more Driver Overhead / CPU Utilisation.

Conversely, making the Drivers more lightweight means that the Hardware (GPU) Utilisation would drop and you'd lose performance.

As it stands the Drivers are about as good as they're going to get, providing that balance between Overhead and Utilisation to give the best performance possible on DirectX 11.2.

You know what's actually funny? The worst combination of CPU and GPU is actually an AMD CPU with an AMD GPU.

A big part of that is Developer Support. 

If Developers utilised the AMD SDKs, such as AGS, more often... then you'd find the utilisation of features unique to AMD Hardware actually ends up quite beneficial.

That said... AMD can't FORCE Developers to utilise their SDKs, and given most will opt to use GameWorks (NVIDIA's SDK), this situation generally gets worse, as those are explicitly designed NOT to utilise anything that would allow favourable performance. AMD's SDKs on the other hand are more Hardware Agnostic, and being open source they can be extended / improved to support Native Hardware "Features" should a developer choose to.

You can't do that with GameWorks, as it is what is known as a "Black Box" Solution; you have access to the API to utilise it... not the actual library code that performs said tasks.

Slower draw calls on AMD CPUs

Part of that is that AMD CPUs simply have Lower Clocks than their Intel Counterparts, which can be offset by proper support from Developers... but few are willing to put in the extra time and resources to make that happen.

Keep in mind that MOST "Commonly used" Libraries are optimised for Intel Processors... but aren't for AMD Processors.

You ever wonder how Console Developers were able to squeeze clearly better performance out of what was a terrible Architecture compared to PC? It's because they took the time to optimise for it, as they had little choice other than to do so.

On PC, most Studios tasked with Porting rarely bother. 

paired with driver overhead that puts more strain on the CPU [especially a single thread], and you get much worse performance in DX11 games.

DirectX 9.0 - 11.2 Games, but even then the Driver Overhead actually isn't a particularly big bottleneck.

What can give a false impression is how NVIDIA Graphics Cards (typically) run better on Intel Processors, but that's misleading... the reality is that NVIDIA are quite petty.

Back when Ryzen first launched, there was an oddity that Tech Reviewers kept drawing attention to, wherein AMD GPUs were running better on Intel Processors (which makes sense, as they had MUCH higher Frequencies and lower Latency) while NVIDIA GPUs were running better on AMD Processors...

Why did this happen? Well, because NVIDIA was having a disagreement with Intel, so they removed the Intel Optimisations and added Ryzen Optimisations in their Drivers.

Right now, NVIDIA is "Concerned" with AMD's rising Market Share and potential Dominance, so we've seen this situation reverse again.

Even still, this isn't the main reason for the performance difference.

Driver Overhead accounts (at most) for maybe a 2-5% Loss in Performance, and this is a "Worst Case" scenario; typically it's less than 1%... meaning it's unimportant for anyone other than Developers.

Architecturally however, you can see up to a 45% Performance Difference on GCN in DirectX 11.2 (or earlier), because that's the peak hardware utilisation that can be achieved.

It has very little to do with the Drivers and everything to do with how the Architecture and Graphics API work together, specifically with what you (as the Developer) are trying to get it to do for your Engine.

Typically you can dramatically shrink said performance gap by taking a different approach, but again... that would take more time and resources. Most Developers don't see it as worth it given AMD's Market Share; not enough End-Users are going to be affected to warrant doing it.

I really laugh at AMD fanboys defending this. I too was duped by their weasel words and bought an AMD CPU AND GPU. Never again; next time I'm doing my research better before I buy a GPU and CPU, and not just looking at meaningless performance graphs...

And I find those who spread misinformation as fact, due to how grossly uninformed they are, to be exceptionally frustrating, as it gives a false impression of AMD and an unrealistic sense of what they can do to improve their Hardware Performance; as if they just can't be bothered to.

Keep in mind that ALL GCN Hardware has seen up to a 15% Performance increase from Driver Optimisation since launch. Just because this isn't "Good Enough" for you doesn't mean they can't be bothered to resolve something that YOU personally perceive as a Software Issue, but which is in fact a Hardware limitation.

You say you were "Duped with their Weasel Words" ... but what exactly did AMD say that had you "Duped"?

AMD has never claimed incredible DirectX 11 performance.

All I've seen them showcase is improvements to close the performance gap in popular titles as best they can... and now Architectural improvements have provided such.

 

Performance Graphs typically do give a good indication of how a Game will play, but it does of course depend on what Combination of Hardware you choose.

Take 1% Lows for example... well, Low-End CPUs / APUs tend to lack or have greatly reduced Level 3 Cache; this translates to Micro-Stutter in A LOT of Games.

Again we're talking a Hardware Issue, not a Driver Issue.

Developers can work around this... but again most choose not to, which is odd given most people have "Low-End" Hardware.

As it stands you're just blanket-blaming AMD here, and specifically their Drivers.

While I'll fully admit that they've had some stability issues the past few Generations (Polaris, Vega and Navi)... performance issues are typically VERY rare occurrences within AMD Drivers.

You focus entirely on blaming Driver Overhead, and I don't even know where that could've come from; especially if you're "not looking at meaningless performance graphs"... the Driver Overhead Benchmark in Futuremark showcases just how small the overhead is on all Drivers today.

Without such, you have ZERO basis for your arguments.

Provide the Data to back up your claims, otherwise they are just baseless claims.

Hi @leyvin 

You give informative information, but I have almost always seen you dismissing other AMD users' posts. This doesn't help in improving the AMD experience; people are complaining about these issues because they want a better experience on AMD, and if they are dismissed it will not be improved upon.

It doesn't help to get too technical about the problems and question other people's knowledge or authenticity; when someone talks about driver overhead it can mean a lot of different things for that person, but it comes down to the under-utilization of one's hardware.

And I understand that there are hardware limitations in GCN that have to be specifically developed for, but there have been cases where unacceptable issues were definitely introduced or reopened.

I have given a lot of proof of how Unreal Tournament 3 took a 50-60FPS hit in low-FPS scenarios going from 16.6.2 to any newer driver, causing 29FPS gameplay in scenarios where it was at 89FPS. Now this might have been due to a Hotfix or Optimization being removed, but why do that? At least make the optimization available for download, for people who want it, if it doesn't fit the current specification.

Furthermore, if I had not reported shadow bugs in RAGE (2011 game) they would not have been fixed, and thankfully the OpenGL Team has fixed them. The fact is this was also an issue/problem present in the driver; maybe it wasn't the driver's fault, but it has been fixed nonetheless, which is necessary for gamers to have an enjoyable experience with AMD.

And I understand about the different versions of DirectX 11, and also OpenGL, but DirectX 12 doesn't always fare better CPU-wise if they don't make things asynchronous enough; good examples are Total War: Warhammer and Serious Sam 4, where I have slightly better FPS and stability in DX11.

Just now I got a 50FPS minimum average with the GTX 1060 vs a 30FPS minimum average with the RX 480 in Sniper Ghost Warrior Contracts, which is one of the newest DX11 games.

Some hack or wrapper must be made for AMD users, even as a 3rd-party option or a hotfix download. It is definitely not all down to the architecture, since some of the same games perform better on the same Xbox One / PS4 architecture with higher graphics settings than on PC. A good example is Wolfenstein: The New Order and The Old Blood (OpenGL games) at 60FPS.

I know if a game has been coded in a certain way a Wrapper doesn't help a lot, but Nvidia made a hack in DX11 for their architecture with Command Lists and Context Deferring even if some games didn't support it; AMD must be willing to do something similar with their Compute Units to better disperse load on the GPU, or however a hack/wrapper would be able to improve utilization even if it doesn't conform to spec.

Because otherwise it's just a blame game: blame it on the Game Devs or blame it on the driver devs. Someone must do something; they should have teams testing for these low-FPS scenarios in gaming and find consensus, as simple as that.

It doesn't help leaving the knowledge battles between end-users over "who knows the flaws and reasons better than each other". It's not about who has more knowledge; consensus and resolution are NEEDED. Ask the community to help create hacks in the driver and oversee it by open-sourcing some things.

Kind regards

In response to my comment just now: according to my 3DMark tests AMD still has a lot more DX11 overhead than my GTX 1060, but this is not the point, because some old DX9/11 games fly to the sky with AMD and others are a horrible mess. Like I said, find consensus and introduce hacks/hotfixes, or open-source it and get help from the community, and make it stable.

I know this is easier said than done, but it is the best option to find a workaround.


The resolution is simple... hold DEVELOPERS' feet to the fire on Performance Issues.

You know how and why NVIDIA can resolve such issues as quickly as they can through various driver-side hacks?
It's because they have several DEDICATED Software Engineering Teams with over 250 Engineers (often head-hunted because they're the best in their fields) who focus on hacking in AS MUCH performance as possible without it being noticed that they're essentially cutting corners, getting various things to run in whatever way gives optimal performance over the API doing what it's supposed to.

And heck, they've been able to hide even more since "convincing" large numbers of Development Teams to use their Black Box GameWorks Technology.

While things might've changed, AMD conversely has had an ENTIRE Software Engineering Team of < 35 who have to wear many different hats; providing the Linux, Windows, OpenGL, DirectX, SDK, Professional Compute, etc. support.

I mean NVIDIA has an Agile Team dedicated to JUST Bug Fixing that's twice as large as AMD's entire Software Team. 

A major reason this situation even exists today is because NVIDIA was able to cultivate the modern Development mentality (on PC) of "Eh, the Drivers or Hardware will fix any performance issues" ... I can't even imagine having such a mentality, as I cut my teeth in an era where instead of 2 competing Hardware Giants there were half a dozen, each with their own Graphics APIs specific to them... with OpenGL and DirectX as more "Fallback" options that would guarantee things worked, but weren't the best option for performance; and even then "Worked" is subjective, as you'd still need to know what a specific Card DID and DIDN'T Support, because there was no requirement to support specific Features of an API to claim it was supported.

The reality was we HAD to KNOW what the Hardware was doing and was capable of... and from there I moved on to Consoles, where sure, it's a more static specification, but there were no "Upgrades"; what you had available was IT, so again you NEEDED to learn how the Hardware worked inside and out to actually achieve what you wanted from it.

I mean on Console, if something doesn't run well... it's rarely the Console's fault, although I saw with the 8th Gen the same excuses I'd seen from PC Developers over the years: "Oh this doesn't support X..." or "It doesn't work the way I want it to..." and that's why things were lower resolution or not hitting the performance expected.

Most of the Performance Issues on, say, the Xbox One stem from the CPU; and more or less those come from Developers ignoring that it has support for Compute... and not using THAT for Floating-Point instead of the FPU itself (which is crap, with a small instruction buffer, and was only added to the K15 Architecture as a Fallback, not a primary, for FP Operations).

And again keep in mind MOST of these Development Companies have 10x the Software Engineers that AMD has... Epic has 3 Teams of ~70 Software Engineers, many of whom are supposed to be some of the best in the Industry; and yet it took them almost a Decade to get Unreal Engine 4 into a "Good" place, and they still don't have a full DirectX 12 Implementation.
Despite the fact that it was Epic themselves who **bleep** near wrote the list of NEEDED features for the LLVM Graphics APIs (like Dx12 and Vulkan)... they then proceeded not to use any of them, until NVIDIA suddenly needed them to.

Unreal Engine 3 they essentially BROKE and left for dead instead of going back and fixing what they broke.

Why? Well, because they don't get any backlash when things break in Unreal Engine... no, instead people launch themselves at Reddit or the Hardware Forums to cry to AMD / NVIDIA that their Drivers are broken and need fixing.

Back in 2010, AMD made the decision to REMOVE not just support for Terascale but a substantial amount of the Bloat that essentially stemmed from over a Decade of Hotfixes, Hacks, Patches, etc. to resolve the MISTAKES of Game Developers... and while as a Gamer, sure, I'd like to have various fixes to play older games on my modern Hardware without issues, as a Developer I fully understand and AGREE that you can't just keep hacking something together; at some point you HAVE to rewrite everything and go back to a clean base, because otherwise you end up with an entangled mess that is near impossible to maintain, and you end up breaking more than you can fix. (see: Windows)

YES, that sucks for End-Users... but it's necessary to do.
I mean the Driver alone (none of the additional software like Catalyst Control) was 350MB... and why? Because DEVELOPERS aren't fixing their OWN games.

Which, as it stands, is what AMD spends a majority of their time resolving; meaning ACTUAL Bugs with the Drivers, Control Software, etc. take MONTHS to see fixes, as they're so low on the Priority List.

Could AMD fix your favourite games? Perhaps, but why should they have to if the issue ISN'T with their Hardware or Drivers, but instead with the Game Engine? Why aren't you complaining to Epic or whatever Developer to FIX their own stuff? It's not like they don't have the available Software Engineers, or couldn't afford the Test Hardware.


@leyvin thank you for your response.

Firstly, out of the Xbox One and PS4 games we own we haven't experienced any performance or graphical issues, but we did experience some performance issues on PS3, which was due to its unique CELL processor design, similar to AMD's situation on PC; and yet for some reason AMD owns on console: same game developers, same driver developers, apparently more difficult APIs, better performance.

The assumption that I do not report to game developers is completely wrong; who are we as END-USERS supposed to contact if we do not know who will FIX the issue? That is the point. Don't stop people from REPORTING ISSUES. We have to do it somewhere, since EVERYTHING is a BLACK BOX to us. That's why I suggest, if the AMD GIANT apparently still has fewer employees than most businesses in the rest of the world (which is always the reason given), to OPEN SOURCE the drivers and dedicate someone to oversee COMMUNITY FIXES to prevent MALICIOUS SHADERS.

Furthermore, I don't mean they should MAKE a MESS of HOTFIXES; naturally they should still keep the CORE STABLE DRIVER and interchange it with a FIX/WRAPPER/SHADER (which can be an optional download) at run-time where the scenario helps bring a resolution.

Once again I agree game developers should work more efficiently, but you @leyvin are always writing (at least) on the assumption that it is never on AMD's side that the issue occurs, when in fact I remember now that I also reported to AMD that Wolfenstein II: The New Colossus had a bright light in a room full of water where nothing was visible, and they fixed it; from my viewpoint they might've reported it to MachineGames or maybe AMD hacked it, who knows? How would we know? Where should we post issues then?

Another fact is, it is not feasible to upgrade components every two years just hoping that architecture will resolve it, something I experienced over the last 6 years with AMD with MINOR success, and it is a very disturbing and back-setting experience. From what I can tell, you regularly seem to have the latest hardware components from AMD and thus you won't be able to pick up such performance disturbances as easily unless you go looking for them, and maybe you are afraid that improving the general performance of AMD users will affect you at the Enthusiast end. Remember some of us also paid A LOT for these systems, and having them under-utilized is not solved by comparing each other's knowledge about the history of the components; Nvidia also has their own architecture that they have to "hack" to perform well, even though it might not even be a better architecture. In addition, very few games today use Nvidia GameWorks; I would say from my experience throughout history maybe 40% of games used GameWorks.

From my perspective, GPU Driver developers have a lot of control over how performance can play out, since Graphics APIs are mostly specifications and the GPU Driver developer can manipulate the data coming through that specification to disperse it over the Compute Units, so why let the API hold you back? The API is also only a Black Box to game developers, like a pipeline data has to be fed through, so in most cases it is probably only feasible to develop the game in one way irrespective of the system, which then has to be fed to the same API irrespective of the system. Therefore, what happens on the other side of the specification can best be resolved by AMD or THE COMMUNITY (if power were given to them like on OpenGL Linux).

You make it sound like you were/are part of AMD or some Console Team? Is that why you have a better view-point than everyone else?

Kind regards


To add to my comment just now, I have also experienced some Unreal Engine 3 games performing very well on AMD, such as Rainbow Six Vegas, and Thief (although I was mainly testing it on Mantle, but at the start it seemed to perform better in DX11 (on that note BF4 definitely runs better in DX11 from what I remember)). I had Splinter Cell Blacklist performing insanely well on my AMD system, although that was Unreal Engine 2.5 with DX11, so it is impossible for us to point fingers; I have Dead Space 3 running at over 100FPS with max 1080p settings and that is on DX9, but the much older Unreal Tournament 3 struggles to hit 29FPS in some locations at the lowest 640x480 settings on my Radeon, so who is to blame now? Why can't they let the community or someone help resolve issues like this?


Firstly, out of the Xbox One and PS4 games we own we haven't experienced any performance or graphical issues, but we did experience some performance issues on PS3, which was due to its unique CELL processor design, similar to AMD's situation on PC; and yet for some reason AMD owns on console;

Well, you're talking about a subjective experience.
Objectively most XB1 Games will run at Lower Resolutions (720p/900p vs. 1080p) and typically struggle to hit 30 FPS Targets compared to their PS4 Counterparts... but then the PS4 Version this past generation was typically the "Root" Version that Ports would then be based on.

Now the CELL Processor was notoriously difficult to use, but this was initially because the Platform Development Kit from Sony was terrible, which made it more difficult to use... combine this with Developers being unfamiliar with Compute-Style and Asynchronous Multi-Threaded Programming (both being in their infancy at the time) and, well, this is why the PS3 gained the reputation it did, with games rarely capable of taking advantage of it.

And yes, this is technically similar to AMD's K15 Architecture, but I doubt that's exactly what you meant. Still, a continued problem to this day is that Developers just DO NOT use the Compute Element of CPUs to offload Floating-Point Operations... despite said components typically offering 4 - 8x the Performance in Like-for-Like Operations, with better support for Multi-Threading and Asynchronous operations.

With this said, that's NOT why AMD ended up being used exclusively by Sony / Microsoft for their Consoles the past 2-3 Generations... it's because NVIDIA destroyed their relationships with said Companies and as a result almost entirely cut themselves off from the lucrative Console Market.
As it stands we'll have to wait and see if Nintendo sticks with them, but given the Switch Pro doesn't have any updated chips, and that Nintendo gets them "Off-the-Shelf" rather than as a Semi-Custom Solution, that speaks volumes.

same game developers, same driver developers, apparently more difficult APIs, better performance.

Yeah, this is incorrect.
It's unusual for the same Developer to work on every Platform a game is released on; what tends to happen is a version is created for a "Primary" Platform (PS4 in the last generation)... this is then given to a 3rd Party Studio, which Ports said Primary Version to another Platform.

Drivers are also different, as Windows doesn't exactly provide "Direct" Hardware Access per se... have you not seen me complaining about HDR and HDMI Support on Windows 10? Yeah, a lot of that isn't because the AMD Drivers are "Incapable" of it but because Microsoft (Windows) doesn't allow the Drivers that level of Direct Access to the Hardware.

On Consoles however... well, on the PlayStation the Driver is essentially built into each Game; so more-or-less, if a Developer wanted, they could have a bespoke version of the Graphics Driver that best suits their Game... whereas on Xbox it's a universal driver, but there is a difference between the Game OS (that Games use) and the Windows OS (which the Dashboard uses).

The Game OS is essentially stripped down to JUST what games need, which includes DXGA being a more "Minimal" version; this allows various Low-Level Features to be handled by the AMD Driver rather than the Windows Interface; hence better HDR Support, better Latency, etc.

This said, the API isn't "More Difficult"; they're more or less the same as their Desktop Counterparts.
In fact if you're working on PlayStation, it's IDENTICAL to developing a Linux Game. Such however is more work, hence why Linux Games are uncommon on PC... as while you can potentially get an extra 10-15% Performance squeezed out, it takes longer and is more work; esp. to make it Compatible with a large number of Configurations.

And yes, it can mean better performance, but again, that's only relative to the amount of work and optimisation the Developers are willing to put in.

In regards to Xbox One / Series vs. Windows 10 PC... there really isn't much more optimisation that can happen there that you can't do on PC. Yes, performance is typically better on Xbox than in the PC Port; but again this is entirely down to Developer Effort, NOT any fundamental difference in Hardware / Drivers / etc. that offers a clear advantage.

The assumption that I do not report to game developers is completely wrong; who are we as END-USERS supposed to contact if we do not know who will FIX the issue? That is the point.

Maybe you do also report issues to Developers, but here's the thing... do you see that as a common thing?
I look through Game Forums, Reddit, etc. and sure, you'll see posts complaining about Performance or Issues; but typically the response is "Update your Drivers" or "Report it to AMD/NVIDIA", and that's even the case from the Developers themselves.

And the thing is, sure, both AMD / NVIDIA will usually "Fix" the Issues; but keep in mind, they have NO ACCESS to the Source Code for that game... they have to slog through Performance Metrics, look at what's happening Low-Level on the Hardware / Drivers, then hack in some fix to bypass it, and in some cases actually build NEW solutions that SHOULD be done via the Game Engine.

Why? Because Developers see such issues as "Minor" Problems affecting a "Small" Number of people and not worth their time... so what, it's worth the time of the smaller teams of Software Engineers, who have to put in WAY more time and be taken away from ACTUAL issues with the Drivers?

Heck, it must be so soul-crushing when one of the Engineers spends a Month trying to fix a Performance Issue that arose from a Fortnite Update, only for the very next Patch to introduce a NEW Performance Issue or, worse, undo all said work.
And I use that as an example because it happens with SUCH regularity that it's ridiculous.

Developers barely talk to AMD / NVIDIA... and they certainly aren't held to account when THEIR Games run like crap.
Bigger noise NEEDS to be made on Developer Forums; more people need to be involved in making them actually do something... and look at it like this: let's say there IS an issue with the Drivers; well, I don't know why OTHER Developers don't notice it earlier, report said issues to AMD / NVIDIA and work with them on Solutions.

It's something I do, but I'm just a small independent developer today... I no longer have the backing of the bigger Studios / Publishers that I've worked for in the past; and yes, on Console when we said "This is not working as it should", typically it would get fixed pretty quickly; and the best part being that it would then work with the performance it should, OR you're given an alternative because what you're trying to do is just not well suited to the Hardware.

Heck, AMD produces the largest number of Whitepapers on "Best Practises" for using their Hardware... they're not long or difficult reads, and they show how to use the APIs effectively from the START rather than when an issue appears.

That's why I suggest, if the AMD GIANT apparently still has fewer employees than most businesses in the rest of the world (which is always the reason given), to OPEN SOURCE the drivers and dedicate someone to oversee COMMUNITY FIXES to prevent MALICIOUS SHADERS.

AMD is not a "Giant"... sure, they're profitable now, and I'm hoping that new-found profitability is being invested wisely, such as in more Software Engineers; but thinking the Solution is Open Source is naïve.

That said, AMD publishes their ISA openly (unlike NVIDIA, which requires a rather costly License agreement to access it) and the Microsoft driver development kit is packaged with Visual Studio; so what is stopping industrious programmers from producing a 3rd Party Driver? Nothing.

I mean, years ago we DID have 3rd Party Drivers (Omega) for ATI, and that's when it was a closed ISA; why that stopped once the ISA was made publicly available is beyond me; but I guarantee that, like the Linux Drivers today, they'd remain several steps behind and lacking features compared to the Official Drivers... making Open Source a waste of time and resources.

Furthermore, I don't mean they should MAKE a MESS of HOTFIXES; naturally they should still keep the CORE STABLE DRIVER and interchange it with a FIX/WRAPPER/SHADER (which can be an optional download) at run-time where the scenario helps bring a resolution.

That would still create a mess of a Codebase that would be hell to maintain and keep track of.
Learn Programming, and you'll quickly see what you're suggesting is just making more work for the Driver Team when, as I've said over and over and over... most issues people bring up are things DEVELOPERS should be fixing in their own Games.

Another fact is, it is not feasible to upgrade components every two years just hoping that architecture will resolve it, something I experienced over the last 6 years with AMD with MINOR success, and it is a very disturbing and back-setting experience. From what I can tell, you regularly seem to have the latest hardware components from AMD and thus you won't be able to pick up such performance disturbances as easily unless you go looking for them, and maybe you are afraid that improving the general performance of AMD users will affect you at the Enthusiast end.

I rarely have "Bleeding Edge" Hardware... but I would point out that the Drivers are typically MUCH more immature when it comes to the Latest Hardware.

It typically takes AMD between 6-12 months to get their Drivers up to spec for their Latest Hardware... meaning early adopters are the ones getting screwed over, as you pay a premium for Hardware that is generally going to be plagued with issues you just have to live with until the Driver Team has the time to fix them.

And that time disappears when AMD Corporate seems to believe a good use of time is revamping the Settings UI & Features, or "Fixing their Reputation" by blitzing fixes for all the games people are crying about; things that could (and should) be handled by Developers, NOT the Driver Team.

In addition, very few games today use Nvidia GameWorks; I would say from my experience throughout history maybe 40% of games used GameWorks.

There are more than you might think that use GameWorks.
Anything using Unity 3D or Unreal Engine uses GameWorks... as those Middleware Engines use it.
And between 2015 - 2019... there was a MASSIVE surge in utilisation, with almost every (major) release using it.
Don't believe me? Look in the Game Directory; you'll see the GameWorks DLLs in there, because Developers are using said Middleware heavily.

From my perspective, GPU Driver developers have a lot of control over how performance can play out, since Graphics APIs are mostly specifications and the GPU Driver developer can manipulate the data coming through that specification to disperse it over the Compute Units, so why let the API hold you back? The API is also only a Black Box to game developers, like a pipeline data has to be fed through, so in most cases it is probably only feasible to develop the game in one way irrespective of the system, which then has to be fed to the same API irrespective of the system. Therefore, what happens on the other side of the specification can best be resolved by AMD or THE COMMUNITY (if power were given to them like on OpenGL Linux).

A Graphics Driver is supposed to do ONE thing... allow the Hardware to talk to the Interface Software (OpenGL / Vulkan / Direct3D) so that the commands a Developer uses DO what they're supposed to.

While sure, it's the communication layer between the Graphics Interface and Hardware... so yes, there is the potential to Optimise things; this is the EXACT issue I've had with the HAL Graphics APIs for the past Decade.
They took ever-increasing control AWAY from the Developer (like myself); it WAS a Black Box where you just had to hope that the Command did what it was supposed to, and that the Drivers were doing it in a way that you would've wanted it done.

LLVM APIs like DirectX 12 and Vulkan are a godsend, as we (as Developers) FINALLY have the Control to communicate more directly and get the Hardware to do what we want... rather than hoping the Drivers are doing what they claim they are.

What I want is for AMD to focus exclusively on just getting the fundamental Hardware<>API interaction working to Specification, and sorting out the Features in their Radeon Settings.

When it comes to Games, no... it's entirely down to DEVELOPERS to sort out their own messes.
I neither want the bloat this causes in the Drivers from having AMD "Fix" other people's mistakes, and I want them to stay Agile enough, with a smaller codebase, to be able to focus on and fix things that ARE or WILL BE an issue in the future.

Things that, so often, they just don't get time to get around to.

I mean, they essentially dropped support for OpenCL on their CPUs because they just no longer had the time to invest in maintaining it. Why? Because they're now focusing on fixing countless issues that Game Developers are causing by misusing or simply not supporting their Hardware.

And this is despite an entire website FULL of information on how it works, and APIs from AMD themselves to better utilise their Hardware.
What would be great is if End-Users (Gamers) GOT on the same page and held DEVELOPERS to the same level of scrutiny and accountability that they levy at AMD / NVIDIA.

Hi @leyvin 

"Learn to program". Believe me I have had my fair share of highly challenging, sleepless and unhealthy stressful years of programming. But one again this is not the point, whilst I do appreciate the effort you have put into your comment.

A fair amount of the things you say are true and like I have agreed many times it is not entirely up to the driver developers to fix game specific performance issues, what I suggested is for better consensus between the community, game developers and driver developers as part of the bigger solution, and no I did not say Open Source would be the only solution like you made your own statement "but thinking the Solution is Open Source is naïve", but that it could potentially help fix anomalies in performance where AMD do not have the time for such scenarios.

Yes I also find it very frustrating that game technical support always gives "update your drivers" as the first response without reading that one' already did that. But I have noticed they seem to be only people following a list of predetermined troubleshooting steps rather than the actual developers giving a response, and in rare cases do they report to the game developers.

"That would still create a mess of a Codebase that would be hell to maintain and keep track of." Maybe it wouldn't have to be kept track of, the point isn't to put all the load on AMD, but if they could make sections interchangeable or inject-able through something like an API where "hotfixes/injections/wrappers" could be obtained from a Open Source community whom have access to important parts of the driver source code, which doesn't have to be part of the main driver download. It's simply a suggestion out of little other suggestions, where in great likelihood performance issues will still occur due to current practices.

"There are more than you might think that use Gameworks." I am fully aware of this, I notice PhysX DLL's in a lot of game libraries, even the Hitman game's partnered by AMD, but this is the PhysX game engine on a CPU level and doesn't always underutilize hardware, theHunter Call of the Wild and Witcher 3 are good examples of this.

"A Graphics Driver is supposed to do ONE thing... Allow the Hardware talk to the Interface Software (OpenGL / Vulkan / Direct3D) so that the commands a Developer uses DOES what it's suppose to." Almost exactly what I described in my comment, except it primarily allows a software application to make system calls through the driver to the Hardware, not mainly Vice Versa. Which is why I believe something similar to command lists and context deferring must be done on AMD, which as far as I could find AMD hasn't built into their DX11 driver.

My point is not about them fixing other peoples mistakes, but rather making the driver more adaptable to the instantaneous input stream of draw calls since it seems pretty static in this regard. And I have read people giving reasons about GCN missing a software scheduler, but doesn't seem accurate since Windows 10 Harware Scheduler mode doesn't have Radeon support for my RX 480 as opposed to the GTX 1060.
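For what it's worth, whether a DX11 driver actually advertises native command lists and concurrent creates (the "hack" described above) can be queried directly; a small sketch, assuming an existing ID3D11Device:

```cpp
#include <d3d11.h>
#include <cstdio>

// Ask the driver whether it natively supports concurrent resource creation
// and command lists; if not, the D3D11 runtime falls back to emulating
// deferred contexts in software on the immediate context's thread.
void ReportDriverThreadingSupport(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING threading = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                              &threading, sizeof(threading))))
    {
        std::printf("Driver concurrent creates: %s\n",
                    threading.DriverConcurrentCreates ? "yes" : "no");
        std::printf("Driver command lists:      %s\n",
                    threading.DriverCommandLists ? "yes" : "no");
    }
}
```

Tools like GPU Caps Viewer expose the same flags, so end-users can check what their installed driver reports without writing any code.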

And yes, I was very excited for Vulkan and DX12, but only id Tech 6 and 7 utilized them very well; other games I own show a degradation of performance with them, most likely due to improper implementation.

Kind regards

Another thing to look out for, which I have investigated recently, is OpenGL 4.5 vs Vulkan performance in Doom 2016 on my Computer.

On average I get the following results

RX 480 OpenGL 4.5: 40-70FPS

GTX 1060 OpenGL 4.5: 70-100FPS

RX 480 VULKAN: 100-150FPS

GTX 1060 VULKAN: 90-120FPS

Now for my RX 480 that is barely half the performance of Vulkan 90% of the time; you can't tell me this is architecture alone, and id Software is definitely a company that doesn't take the optimization of their games lightly, so they would've most likely done all they could with that OpenGL path.

I am glad the game renders properly in OpenGL 4.5 on AMD, and the game handles input response at low FPS on OpenGL 4.5 pretty well, it even feels better at low FPS than the Wolfenstein MachineGames idTech 5 implementations, but this basically almost translates to DX11 performance on AMD.

What I wanted to do was test OpenGL performance for Doom on Pop!_OS Linux, but I couldn't get it past the loading screen, possibly due to the small SSD I had the OS running on.

Kind regards


I do not mean to continue to argue, but

"Objectively most XB1 Games will run at Lower Resolutions (720p/900p Vs. 1080p) and typically struggle to hit 30 FPS Targets compared to their PS4 Counterparts"

It's very rare that even first-generation XB1/PS4 console versions run games at 720p; this is when they try to hit 60FPS for a game like Star Wars Battlefront or Battlefield 4. It is true that in some cases the base XB1 runs at slightly lower resolutions than the PS4 to achieve the same frame-rate, but it isn't that bad in most cases, since it depends on which features of the console and budget you prefer; we like both of them.

Then on the 30FPS: personally, some games do look better and more cinematic from a couch at console 30FPS (not PC 30FPS) with higher graphical settings, and some people will call you a console peasant for saying this, but it's not down to performance alone why they lock to 30FPS in most cases. Furthermore, I have done in-depth testing in the same games on the same monitor; for example, I tested Hitman Absolution on PS3 vs my PC (locked at 30FPS in all the different possible combinations) and it just doesn't reach the smoothness of the console counterpart. In addition, some animations actually lag behind on PC when doing this; for example, in Unreal Tournament 3 the game dynamically disables rocket lighting below 35FPS, whereas on console it doesn't, even at 30FPS.

I am glad console designers chose AMD, because they made a great product; as for the reasons, I am not up to date, but according to me it was just a better deal budget- and partnership-wise, and AMD hardware has a lot more to offer if utilized properly, which they definitely do on console.

"Yeah, this is incorrect." Definitely not false in the majority of cases.

"It's unusual for the same Developer to work on every Platform a game is released on"

Well, at least in the majority of cases they give credit to the same developer company, even though it might be a different team in the same house; there are cases where they have a different company porting it to a different console, which they did for Valve's Orange Box on PS3 for example, but at least between Xbox and Windows I have noticed the same developer logo 90% of the time.

I hear/read what you are saying about the console driver being stripped down, but honestly even a standard Xbox One can provide most if not the same "entertainment" my PC can, which means the OS can't be that much lighter anymore, since it also has to have Windows Defender etc. built into it.

Furthermore, I have done research in the past showing the base Xbox One is clocked at something like 1.2GHz, and it is basically the same CPU as my FX 8350 although changed to be an APU; therefore my CPU is clocked almost 4x higher at stock, so surely it should be able to handle games at the same frame-rates, but in a lot of cases it falls short; for example, the first two post-2011 Wolfenstein Games maintain 60FPS much more consistently than a similar Radeon PC does.

I would believe that API coding should be very similar between Xbox and Windows at a high-level API level, since XNA games could be ported without effort from what I remember reading in one of my books, but PlayStation usually has a different low-level API apart from its standard High-Level API.

For anyone that has the same issue: my scores are down across the board for anything NOT DX12. DX12 benchmark scores increased while everything else decreased. Yes, people still use these older benchmarks on HWBot and on my personal Discord server.

Driver version 21.3.1 vs 21.5.1

3DMark06
https://www.3dmark.com/compare/3dm06/18187070/3dm06/18191098#

3DMark Firestrike
https://www.3dmark.com/compare/fs/25544553/fs/25315519

3DMark11
https://www.3dmark.com/compare/3dm11/14377949/3dm11/14436035

3DMark Vantage
https://www.3dmark.com/compare/3dmv/5871476/3dmv/5875904

Everything decreased except Port Royal and Time Spy, which are DX12. This, despite having a faster CPU with more cores and higher GPU clocks (according to the test results).

For anyone that has the same issue: my scores are down across the board for anything NOT DX12. DX12 benchmark scores increased while everything else decreased. Yes, people still use these older benchmarks on HWBot and on my personal Discord server.

Driver version 21.3.1 vs 21.5.1

If you look closer at the details, while 3DMark11 is getting a lower score, look at the Framerates; they're 15 - 20% Higher.
No, where you're losing the Score is almost certainly in the CPU Tests.

And this makes sense.
Remember that the R7 5800X is 2x8 Cores, whereas the R9 59x0X are 4x8 Cores (and in your case 2 Cores are disabled in each CCX to produce 24).

This means that where the R7 5800X's 8 Cores are limited to a Single CCX, the R9 5900X is going over 2, and Latency kicks in... for most games this doesn't make much of a difference, especially if they use Scalable Threading; but for a test like 3DMark11, which is limited to 8 Cores, well, that's a different story.

The same is true for 3DMark 06... again the issue is that it doesn't know how to handle the Multiple CCXs in the Processor.

If you ACTUALLY wanted to test that there was a performance difference in the Drivers, you'd use the same Hardware.
Otherwise you're just invalidating your own results; and yeah, the CPU is essentially the ENTIRE reason that only DirectX 12 (and Vulkan would as well) are showcasing the results they do.

Hi @leyvin 

When you wrote " but still a continued problem to this day is that Developers just DO NOT use the Compute Element of CPUs to offload Floating-Point Operations", were you referring to offloading floating point operations to the GPU instead of letting the CPU's FPUs do the calculations?

I wish there were a way to monitor FPU usage in MSI Afterburner like you can monitor ALU performance; then one could see how often the FPU bottlenecks, and the same for the memory/bus latency between cores.

Because to me it seems there is still something bottlenecking on AMD CPUs, or in game code, which cannot be detected in the utilization percentages.

For example, when I tested Test Drive Unlimited 2 I get the same low GPU utilization and FPS dips to 38 on an FX 8350 and R5 1600, but on an i5 8400 it's always above 60FPS with the same GPU.

"What would be great is if End-Users (Gamers) GOT on the same page., and held DEVELOPERS to the same level or scrutiny and account that they levy at AMD / NVIDIA."

The closest end-users get to this is on Steam discussion forums, and then the developers simply do not respond, and if they do respond it is likely just a representative for the discussion topic.

Kind regards


were you referring to offloading floating point operations to the GPU instead of letting the CPU's FPUs do the calculations?

No. The reason that K15 (Bulldozer) had a single FPU/SIMD unit per Module, sharing a Processor Thread with the ALUs, is because it is a "Fall Back" for compatibility purposes.
Prior to Radeon GCN Compute Units being included (Fusion Media Processors / APUs) there was a Fusion Compute Unit; this could be used via OpenCL 1.x to greatly accelerate any Floating-Point or Vector Operation.

Now the GCN CUs are better, but the FCUs are still quite decent...
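As a rough illustration of the kind of offload being described, here is a minimal OpenCL 1.x host sketch (error handling and resource releases omitted; the kernel and function names are just placeholders) that hands a floating-point multiply-add over to the compute device, which on an APU is the on-die Compute Unit array, instead of looping over the data on the CPU's FPU:

```cpp
#include <CL/cl.h>
#include <vector>

// OpenCL C kernel: one work-item per array element.
static const char* kSource = R"(
__kernel void fma_array(__global const float* a,
                        __global const float* b,
                        __global float* out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] * b[i] + out[i];
})";

void OffloadFma(std::vector<float>& a, std::vector<float>& b, std::vector<float>& out)
{
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Copy the host arrays into device-visible buffers.
    size_t bytes = a.size() * sizeof(float);
    cl_mem bufA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a.data(), nullptr);
    cl_mem bufB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b.data(), nullptr);
    cl_mem bufO = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, bytes, out.data(), nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "fma_array", nullptr);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &bufA);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &bufB);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &bufO);

    // Launch one work-item per element, then read the result back.
    size_t globalSize = a.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, bufO, CL_TRUE, 0, bytes, out.data(), 0, nullptr, nullptr);
}
```

Whether this wins over the FPU in practice depends on the data size and the cost of the copies, which is part of why so few game developers bother.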

For example, when I tested Test Drive Unlimited 2 I get the same low GPU utilization and FPS dips to 38 on an FX 8350 and R5 1600, but on an i5 8400 it's always above 60FPS with the same GPU

That's much more likely to be a result of Intel Specific Optimisations.

@leyvin  thank you for your response.

The thing is, my Intel i7 870 didn't perform any better, also with low usage; the only relatively new i5 I have had my hands on was the i5 8400.

As far as I could research, the FPU in Piledriver was designed to be able to handle both ALUs on the module simultaneously, which can be believed from multithreaded benchmarks such as CPU-Z; according to my logic that is not any worse than a hyperthreaded / SMT i7, which has one FPU per core where the core can have two processes available in its caches at a time, so the FPU has to be able to handle two logical cores at a time. That seems logically the same to me, although Piledriver actually has two weaker physical and logical cores per FPU.

According to my understanding, OpenCL was AMD's gateway to provide their own version, or an open-source version, of PhysX or Hardware Physics, but what I do not know is if it could be used with OpenGL only or with DirectX as well? I know the Tomb Raider games and Deus Ex had TressFX, which was the alternative to Nvidia HairWorks.

Kind regards


As an end user I don't care how developers are coding their game if a driver team CAN address that issue. Moreover, since DX11 is a high-level API, it's the duty of the AMD driver team to optimize their drivers for particular games AND common development practices. If there's a bunch of popular game engines or dev techniques that clog a single CPU thread, because that's the way it is and it won't change in the future, it's AMD's duty to address this [they are further down the line: devs > AMD (driver) > end user] to improve the end-user experience. NVIDIA has shown that it can be done. Besides, even if devs use best practices, making multithreaded command lists is still beneficial in any possible scenario I can imagine. It will always be better than a single-threaded CL. Why didn't AMD address this throughout these years? Is their DX11 driver code really built specifically around that single-threaded CL? Do they not have any modular code, so they would have to rewrite EVERYTHING pertaining to DX11 operation? AMD is filthy rich now thanks to their CPU sales, and their GPUs have also become more popular. They should finally fix this and move on to their usual DX12 projects...

AMD definitely has money - they just repurchased 4 billion dollars' worth of their stock... Seems they have their priorities "straight" = not about customers. The board of directors is surely happy with the stock price increasing...

It's a shame that AMD's prime targets are uneducated customers - graphic design programs run worse, so professionals don't use them, streamers don't use them, hardware enthusiasts buy NVIDIA, so what's left are people who aren't well versed with PC hardware: kids, students, or occasionally gullible adults with too much money on hand who buy 6700 XT GPUs and upward. It's so annoying to hear that AMD is the good guy vs bad NVIDIA; you usually hear this from less educated people defending their purchase, not even knowing about the potential problems, because they don't use their hardware in a proper way / to its fullest, play specific games or one game, or just don't care while having no technical expertise to even notice the problem [how many people know about frametimes?]


@rainingtacco  

I agree with you to some extent, I also feel they do not have their customers in mind with the variability of their performance, @AMD mostly puts on a front with their performance graphs and they mainly perform well in some of the most popular games in conjunction with the newest CPUs. In addition, I don't know about current benchmarks but until recently I saw them testing their drivers with top-end Intel CPUs in the footnotes telling me their is something dodgy about certain CPU intense scenarios with games or drivers, but they claim to have the fastest gaming CPUs.

@AMD very seldomly attends to older driver issues and just tend to rub in under the carpet; for example, Chill still disables in certain scenarios when their seems to be too little polygons or something in a game scene and the DirectX11 multithreading still seems left out.

I don't know about their performance with graphic design, but from what I could've read their drivers seems pretty well oriented for graphic design, or for example people keep writing that their OpenGL driver is more workstation oriented. I think their streaming seems awesome at the moment, I saw a friend live streaming with Radeon software on Social Media gaming platforms and I have recorder my own gameplay without problems, so I think their recording tools are possibly the best at the moment, it seems simpler than Geforce experience, although I haven't used it that extensively.

The problem I had with Nvidia is what they did with PhysX for example, before it was theirs' people could use Ageia separately with any GPU, thankfully Nvidia updated their driver by allowing you to use PhysX as a secondary GPU again in the last few years. Furthermore, Nvidia's tools are usually closed source whereas AMD seems to have open sourced a lot of necessary things for future games, although I wish they would attend to older games as well. For example, you can't even run "Return to Castle Wolfenstein" at decent performance with their OpenGL driver.

It's a shame that @AMD are mostly focusing on their public image and forgetting about their actual customers. They could have improved general gaming quality a lot since 2016, and not doing so is going to cost them to some extent, because I read about a lot of people quickly jumping from Radeon to NVIDIA just after they purchased new RX 6000 GPUs, due to persistent driver issues.

I guess I have held out with my Radeon this long, even though they haven't fixed some of the general quality issues since 2016, because I believe there is a lot of untapped potential in the cards.

Kind regards

0 Likes

Saw you tagging me @hitbm47 and thought I'd see what was going on. To be clear, since a great many people seemed not to realise what I was saying back in 2019, reading things now: DX11 draw call performance, if fixed or at least improved, would likely yield generic performance improvements in a lot of (all?) DX11 titles without AMD having to perform per-game driver optimisations. You don't even need to be a code wizard to understand that universal improvements are much easier to manage, and not break, in a driver than trying to make sure you don't break game-specific optimisations with every new driver release. Fixing draw call performance would probably also allow the removal of many messy game-specific optimisations in the driver and thus allow for a good level of driver code cleanup.

From what I was able to tell, the bad draw call performance comes down to a fundamental problem with the driver stack, likely that the driver just isn't multi-threaded at all (hence barely any change, if any, in the 3DMark API Overhead DX11 test). OpenGL AMD gave up on years ago, on Windows at least, so you're flogging a dead horse there; besides, modern OpenGL titles tend to come with Vulkan support, so for PC games that problem is heavily sidestepped. I look at the big picture, though, not just the one immediately in front of me, and when you do that you realise that anyone who plays emulation games (PS2, Xbox, and even PS3 now too thanks to RPCS3) needs a flexible driver at least competent with all APIs, and AMD's Windows driver for DX11 and OpenGL is just horrid here. I doubt it will ever happen (consider that a call to any and all coders willing to take up the challenge), but I do wonder how good a Windows driver would be after an entire rewrite with multi-threading in mind. It's not much to go on, but you can get a peek at the possibilities from this article comparing the Linux driver: https://www.phoronix.com/scan.php?page=article&item=radeon-software-20&num=2 As you'll see, for OpenGL performance at least, the Linux driver really shows just how embarrassingly bad the Windows OpenGL version is, and you only have to go as far as Basemark GPU to see that AMD are consistently good with Vulkan and DX12, but run the OpenGL test and the results fall through the floor, even on the GPU I currently use in the main test system, which is a 6800 XT.

How much the poor draw call performance impacts real-world gaming is where the question comes in, but you can say the impact is likely big enough that fixing it would be noticed by a lot of people. Either way it requires attention from AMD, even if just in a testing capacity, to see how much of a problem it is. As you can read here, https://www.win-raid.com/t5258f51-AMD-Adrenalin-Driver-Analysis.html, in the article I wrote (which in hindsight I'm not happy with; a follow-up, when I get the time, will be more detailed), the apparently poorly optimised driver stack at worst hinders real-world performance, while at best it is an anomaly in 3DMark's API test that needs fixing. In light of your tests in older benchmarks and games, though, there is more weight to the issue being real-world and not just synthetic. Whatever the issue is, if you have a lot of DX11 titles and/or play a lot of emulation games, Radeon is not the best choice you can make, not only in terms of raw performance; emulation games in particular suffer from many more graphical glitches and bugs that you just don't get with NVIDIA drivers.

Nobody is expecting miracles for DX11 and OpenGL performance on Windows, but it's also not asking too much to expect these APIs to run at an acceptable level. One example I can give is Assassin's Creed Odyssey: with a highly optimised RX 590 you can, for the most part, run that game at 1440p with mostly maximum settings and get a good 50 FPS, but every now and then, in situations where it just shouldn't happen (looking out to sea from an island on a clear day, for instance), your frame rate drops to the high 20s or low 30s. I don't have the analytical tools to say definitively, but situations like that certainly look like a draw call limitation and/or a lack of threading optimisation if ever I saw one in motion. There's a lot more I could say, but this post is already rather long, so I'll leave things here for now.

Hi @ketxxx 

Thank you for your input.

That looks like an impressive article; I will try to look at it in more detail when I get the time. I wanted to try Linux gaming on a small spare SSD I have, but it seems Linux doesn't play well with NTFS drives and it would be best if everything could be formatted to an EXT filesystem; furthermore, I could barely get anything to launch.

I will edit or respond to this comment later.

Kind regards

0 Likes

Hi @ketxxx 

So I could not edit my last reply. I still need to look at your article, but it looks like you did rigorous testing across all the driver versions.

Yes, emulation is a problem, and last I tested, RPCS3 only had OpenGL and DirectX 11 back-ends. In addition, yes, most games I own are DirectX 11, but beyond that, most games on digital stores are DirectX 11, and this doesn't seem likely to change, since developers seem to have a great time developing with it. Furthermore, only the newest OpenGL games sometimes have Vulkan alternatives. I would say only 10% of OpenGL games out there have Vulkan alternatives on Windows: only Wolfenstein II, Doom, The Talos Principle and No Man's Sky from what I can think of.

And if you think about it, any GPU vendor should properly support all the benefits of DX11, because it can't be expected of game DEVS to make two or three differently optimized DirectX 11 paths for their game, never mind other APIs. What if more GPU vendors enter the space besides Radeon, NVIDIA and Intel? If suddenly we have five different GPU companies, developers can't do five different DX11 optimizations; they'll have to follow one spec that should theoretically use all the cons of DX11's specification, and then the driver should scale the rest properly. What do you think?

I guess it sounds way easier than it is, but like you said, the Linux open-source driver proves that it should be possible, I think, since OpenGL and DirectX 11 work quite similarly in terms of draw call submission. But then again, you don't know what CPU they used in those Linux benchmarks; it could have been a monster-IPC Intel, since even the proprietary driver had high frame rates on Linux.

Another thing: what if the driver could be dynamic? Because I don't think it is. Then you wouldn't have to optimize for every game, including indie games, since the driver could scale draw calls dynamically whenever some other part of the pipeline wasn't scaling.

Some DX11 games are amazing even on an RX 480 & FX 8350, such as Battlefield 4, Aliens vs. Predator, Splinter Cell: Blacklist and Sniper Elite 3, but something like 60% of the others, including the Far Cry games and Crysis 3, are horrible, and it's not possible for me to know what the problem is, although in those cases I can tell the CPU and GPU are being heavily underutilized.

Kind regards

0 Likes

*I meant use one spec that can use all the pros, not the cons, excuse me

0 Likes

@hitbm47 Standards do exist for APIs; that's the point of DX11, DX12, OpenGL, etc. The developer and the game engine used are big parts of a good implementation and good performance, as is the extent of support and optimisation for the API in the graphics hardware and driver. Microsoft's standards and a developer's best efforts are all in vain if AMD aren't willing to improve portions of the driver for better performance in DX11 and OpenGL. There's only so much slack you can pick up when one party isn't willing to pull its weight.

As for how to "fix" the driver in AMDs case making it more dynamic would quite likely make the problem worse with the driver in its current incarnation 3DMarks API test is based on dynamic batching anyone that runs the API overhead test on AMD hardware can tell you just how bad DX11 multi-threading results are. Perhaps AMD can fix the dynamic batching in the driver by making the driver itself properly threaded, asychronous compute might be another possibility for addressing the drivers dynamic batching/threading problem, or maybe the problem can't be fully fixed where its something fundamental with the hardwares architecture but ensuring the driver is written to be properly threaded you can at least alleviate the problem in highly threaded situations.

All of this is just conjecture until the specific cause of the bad 3DMark API Overhead results is known, but from what I have been able to learn, I'd be very surprised if the problem is anything other than how the driver handles draw calls and dynamic batching requests.

Hi @ketxxx 

Yes, I am fully aware that standards exist; this is why I mentioned the DX11 specification. But it is up to AMD, NVIDIA and Intel to implement the actual code for the specification in their drivers, and AMD did not seem to implement optional features of DX11 such as command lists.

I also have 3DMark, and yes, the multithreaded draw call benchmark is worse almost every time than single-threaded for Radeon. But another program, MultithreadedRendering11.exe, which can be found by installing the DirectX 11 SDK, shows different results: I actually score something like 120 FPS with MT deferred contexts vs 79 FPS with the ST immediate context in DX11 on my RX 480, while my GTX 1060 scores something like 180 FPS MT vs 90 FPS ST, and in both cases the software/CPU is the bottleneck.
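For anyone curious, here is a minimal sketch (my own illustration, not the SDK sample's actual code) of the pattern MultithreadedRendering11 exercises: worker threads record into deferred contexts, FinishCommandList produces command lists, and the immediate context replays them. Device creation and real draw state are assumed to exist elsewhere, and the draw call is just a placeholder.

```cpp
#include <d3d11.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Each worker records its share of the scene into a deferred context.
// FinishCommandList() produces an ID3D11CommandList the main thread can replay.
ComPtr<ID3D11CommandList> RecordChunk(ID3D11Device* device, UINT drawCount)
{
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    for (UINT i = 0; i < drawCount; ++i)
    {
        // Real code would bind shaders, buffers and per-object state here.
        deferred->Draw(3, 0); // placeholder draw
    }

    ComPtr<ID3D11CommandList> commandList;
    deferred->FinishCommandList(FALSE, &commandList);
    return commandList;
}

void SubmitFrame(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    const unsigned workerCount = std::thread::hardware_concurrency();
    std::vector<ComPtr<ID3D11CommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned w = 0; w < workerCount; ++w)
        workers.emplace_back([&, w] { lists[w] = RecordChunk(device, 256); });
    for (auto& t : workers)
        t.join();

    // Only the immediate context touches the GPU queue; it replays each list.
    // Whether this actually spreads CPU cost depends on the driver: if the
    // driver reports no native command-list support, the runtime emulates it
    // and the replay cost stays on this one thread.
    for (auto& cl : lists)
        immediate->ExecuteCommandList(cl.Get(), FALSE);
}
```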

From what I could read up, AMD doesn't fully support multithreading in DX11, which is possibly why 3DMark shows bad results. Furthermore, AMD does support deferred contexts in DX11, but it seems still not command lists. I'll try to link the Intel article that describes it in detail.

https://software.intel.com/content/www/us/en/develop/articles/performance-methods-and-practices-of-d...

Here it shows AMD's DX11 driver only running on one CPU thread, whereas NVIDIA's runs on most available threads, which seems to be why AMD can't support command lists, since they do not split their DX11 driver across multiple cores. Why not, I don't know, because as far as I know this is what they do in DX12.
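As a side note, an application can actually ask the runtime whether the installed driver natively supports this. Below is a small sketch of my own (not from the Intel article) using the standard D3D11_FEATURE_THREADING query; when DriverCommandLists comes back FALSE, the runtime emulates command lists on the submitting thread, which lines up with the single-threaded behaviour described above.

```cpp
#include <d3d11.h>
#include <cstdio>

void ReportThreadingSupport(ID3D11Device* device)
{
    // Ask whether the driver (not the runtime) implements concurrent
    // resource creation and native command lists.
    D3D11_FEATURE_DATA_THREADING threading = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                              &threading, sizeof(threading))))
    {
        std::printf("Driver concurrent creates: %s\n",
                    threading.DriverConcurrentCreates ? "yes" : "no (emulated)");
        std::printf("Driver command lists:      %s\n",
                    threading.DriverCommandLists ? "yes" : "no (emulated)");
    }
}
```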

0 Likes

That pretty conclusively shows, with the DX SDK test, that AMD's DX11 driver capabilities blow. Hard. There is now 3DMark's DX11 API test, Microsoft's own DirectX 11 SDK test, and a plethora of DX11 titles that heavily rely on driver threading capabilities (the 4A Engine being a prime example) for optimal performance. It is no wonder so many games on last-gen consoles struggled so badly for performance. DX11 + AMD drivers = epic fail. AMD, as I suspected way back even before 2019, absolutely must rewrite the DX11 portion of the GPU driver to be properly threaded. It's rather ironic that a company with a CPU division that trumpets multi-threading superiority over single threading can't write a DX11 GPU driver that's properly multi-threaded.

The bad OpenGL performance of AMD's drivers is another story, but again one that looks to be purely down to another poor portion of their driver. If you ever want to pick up some of AMD's driver slack, check out this guide I wrote: https://www.win-raid.com/t5996f51-Guide-How-Do-I-Modify-a-Polaris-Radeon-RX-Series-GPU.html You could improve the actual available bandwidth on the card by about 22%; it would be interesting to see what sort of impact that makes on the DX11 SDK test.

ketxxx, are you an assembler code engineer?

0 Likes

Hi @ketxxx 

Wow, that seems like a considerable improvement in bandwidth, and it would be interesting to see if it makes a difference in driver performance. I am going to play it safe with my card, since I cannot afford to lose my RX 480, which is why I am so frustrated that @AMD's RX 480 drivers have not been scaling efficiently since the card launched in 2016, considering what it has cost me.

I also cannot believe that these drivers do not consider the strengths and weaknesses of their own CPUs, considering that their pre-Ryzen APUs, such as the A10-7850K, can bottleneck the integrated graphics in similar scenarios.

I must admit, I have had a much better experience on their PS4 and Xbox One than on my PC, considering that those console CPUs/APUs only run in something like the 1.2 GHz to 1.9 GHz range (I just remember it being less than 2 GHz) and can hit 30 FPS or 60 FPS targets consistently, whereas my 2-4 times higher clocked 4.0 GHz FX 8350 can't hit the same targets in certain situations.

I believe this is because the consoles do not use exactly the same APIs under the hood, although I think the Xbox API has a similar specification to the one on Windows. Furthermore, maybe it is because Radeon used to be ATI and I think they performed better with Intel; for example, if you look at the footnotes of AMD's Radeon technology features, they are usually tested with top-end Intel CPUs.

There are videos going around on YouTube saying that GCN only has a hardware scheduler, which is apparently why it cannot use command lists in DX11; but why, then, can it apparently use them in DX12? And Windows 10 only allows activating Hardware-Accelerated GPU Scheduling on NVIDIA, not on my RX 480.

I think they are deliberately holding back DX11 performance to promote DX12/Vulkan and their new Ryzen CPUs, to make up for their single-threaded weaknesses in DX11, but it is shooting a lot of us, including themselves to some extent, in the foot.

Kind regards

0 Likes

@rainingtacco No, I'm not, but I do and have done things in similar realms covering an extremely broad area of the IT industry, which makes me pretty well informed, backed up by old-fashioned experience of almost 30 years now. For specifics on something like driver coding you'd need to talk to an actual assembly programmer. If I coded drivers I would have been fixing AMD's driver and releasing modded versions since about 2015 (basically right around the time good threaded GPU drivers became mandatory).

@hitbm47 I can guarantee that you will not hurt your RX 480 by optimising the memory timings. In fact, it's actually recommended, as the stock timings on Polaris cards are truly awful, to the extent that some factory timing values are actually invalid. With a little further work you'd not only have a GPU that performs better but one that is more power efficient, quieter, and cooler than stock. When you know how to do it and what you are doing, it's only a few minutes of work; I could even largely optimise your card for you just by knowing the card's ASIC quality, the memory the card is using, and having a copy of its vBIOS.

The GPU and CPU divisions of AMD are somewhat separate and likely have their own design philosophies, which will inevitably sometimes lead to a conflict of interests, but a sufficiently multi-threaded driver is surely something both divisions can agree is necessary, and given how good Ryzen is at multi-threading, a well-threaded GPU driver would give quite large performance uplifts. You'll probably be interested to know that one of the main reasons AMD performs so much better in DX12 is that DX12 actually makes substantially lower use of a well-threaded driver, as it passes off far fewer asynchronous tasks to driver worker threads compared to DX11; so in reality AMD's GPU driver has had very little to nothing done to its threading capabilities. This explanation also pretty much answers your questions about those YouTube videos: they are basically born of not enough understanding, or just hating for the sake of hating.

It doesn't make sense for AMD to intentionally cripple DX11 performance, because they get so thoroughly outclassed by NVIDIA here; the truth more likely lies somewhere in the realm of AMD not wanting to commit the resources to rewriting the GPU driver, or the GPU driver coders AMD have simply being incapable of writing a well-threaded driver (prove me wrong, driver devs). It's also no secret that AMD weren't exactly flush with cash when the DX11 performance issue cropped up for them, which was probably the biggest contributing factor to it not getting fixed, but now AMD are not short of pennies, and DX11 as well as OpenGL API performance are still powerful deciding factors for customers. Neither API is going anywhere anytime soon, and we all know DX11 titles are littered everywhere, so it is still in AMD's best interest to look at the DX11 problem. Will they though? Probably not, not unless there is a massive uproar about it.

0 Likes

Hi @ketxxx 

Thank you for your response.

"It doesn't make sense for AMD to intentionally cripple DX11 performance", I don't mean that they intentionally add overhead or sleeper threads (or something nasty like that), I mean that they indirectly intentionally hogg DX11 by leaving it as it is without making the best possible version for their driver and think they are making people believe DX12 is so much better (which it can be, but not to the extent they are leaving it).

"You'll probably be interested to know that one of the main reasons AMD performs so much better in DX12 is because DX12 actually makes substantially lower use of a good threaded driver" To my understanding AMD's DirectX12 performs so well to exactly the opposite reason you are describing and here is their own diagram about it,

This shows that their driver can be scaled over cores for DX12. The question is whether game developers actually have to make the correct API calls for the driver to then scale over cores, but this still shows that it should be able to do so in DX11 as well if they implement the necessary features.
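To illustrate what "scaled over cores" means on the application side (a generic sketch of my own, not AMD's diagram or any game's actual code): in DX12 each thread owns its own command allocator and command list, records independently, and the main thread submits everything in one ExecuteCommandLists call. Device, pipeline state and root signature setup are assumed to exist elsewhere.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threads);
    std::vector<std::thread>                       workers;

    for (unsigned t = 0; t < threads; ++t)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[t]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[t].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[t]));
        workers.emplace_back([&, t] {
            // Each thread records its own slice of the frame; nothing is
            // shared between threads until submission, so no driver lock
            // serialises the recording work.
            // ... SetPipelineState / draw calls would go here ...
            lists[t]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission from the main thread covers every recorded list.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```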

"I can guarantee that you will not hurt your RX480 optimising the memory timings", I believe you but it's just something I feel AMD has to attend to themselves, or even first critically improve on this DX11 and OpenGL problem.

Kind regards

 

0 Likes

@hitbm47 AMD is obviously going to use slides that exaggerate how good they are in their own promotional material, but if you go and read some actual developer content on DX11 vs. DX12 you'll begin to see and understand what sort of universal performance uplift there is with just a well-threaded driver. This developer article, for example, explicitly details fundamental differences strictly between Microsoft's DX11 and DX12, and not "AMD vs. NVIDIA" (despite the source, because again, this is developers speaking):

Introduction

The DX12 API places more responsibilities on the programmer than any former DirectX™ API. This starts with resource state barriers and continues with the use of fences to synchronize command queues. Likewise illegal API usage won’t be caught or corrected by the DX-runtime or the driver. In order to stay on top of things the developer needs to strongly leverage the debug runtime and pay close attention to any errors that get reported. Also make sure to be thoroughly familiar with the DX12 feature specifications.


https://developer.nvidia.com/dx12-dos-and-donts
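To make the two responsibilities that introduction names concrete, here is a minimal sketch of my own (under assumed resources and handle names, not code from that article): an explicit resource state barrier and a fence used to synchronise a command queue with the CPU. In DX11 the driver tracked both of these for you; in DX12 nothing corrects you if you get them wrong.

```cpp
#include <d3d12.h>
#include <windows.h>

void BarrierAndSync(ID3D12GraphicsCommandList* cmdList,
                    ID3D12CommandQueue* queue,
                    ID3D12Resource* texture,
                    ID3D12Fence* fence,
                    UINT64 fenceValue,
                    HANDLE fenceEvent)
{
    // Explicit state transition: render target -> shader resource.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);

    // Explicit CPU/GPU synchronisation: signal the fence on the queue,
    // then block until the GPU has reached that value.
    queue->Signal(fence, fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}
```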

That article might even explain some of the errors AMD are making in their own driver, as well as conveniently allowing AMD (and NVIDIA, but at least NV are making articles to help developers; what are you doing, AMD?) to palm off most responsibility for performance matters onto developers, which is highly inappropriate IMO, because a crap driver is a crap driver; developers can only leverage what the driver contains and try to make the best of it. Fundamental driver improvements have to come from the driver devs. The above article does of course contain some NVIDIA-specific content, but instead of me highlighting it all, I'll let this link do that for me (saves my now aging fingers):

https://www.yosoygames.com.ar/wp/2015/09/dx12-dos-and-donts-a-couple-remarks/

0 Likes

Hi @ketxxx 

I do mostly agree with this: "to palm off most responsibility on performance matters to the developers which is highly inappropriate IMO because a crap driver is a crap driver". All game developers are not necessarily computer scientists, and I think it would be disadvantageous to require them to be, since there are people with very creative ideas, and forcing that kind of low-level logic onto them can take away some of that creativity or initial excitement, in my opinion. In addition, I am no expert, but game engines do not necessarily allow general game developers to take full control of scalability; for example, some engines only expose something like three main loops you program in (I can't remember them all, but two examples are the loading loop and the update loop for the game logic, etc.).

I have not gone through that article in depth, but from the first paragraph of dos and don'ts I can tell it is mostly NVIDIA specific, and that is actually how their DirectX 11 driver works: "The idea is to get the worker threads generate command lists and for the master thread to pick those up and submit them" <-- this is exactly how NVIDIA's DX11 driver works, and maybe they want devs to replicate this in DX12, but this is not how AMD's DX12/Vulkan works. As far as I understand, with AMD, draw calls can be submitted asynchronously from every CPU core/thread. The following quote also refers to NVIDIA's specific driver implementation: "On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible", and I think this is because they are running large parts of the driver scheduling on the CPU, whereas I think AMD does the scheduling on the GPU.

If this were not true, it would also contradict the performance uplifts I get in the few games such as Hitman 2 in DX12, where I can actually see the cores being utilized more evenly, as if they are submitting independently, and I get a general 20 FPS improvement from the CPU side. Hitman 2 in DX12 actually pushes my 990FXA VRMs close to thermal throttling in certain cases. Furthermore, I get major performance benefits in Doom (2016) switching over to Vulkan, jumping from a 50 FPS average to a 100 FPS average on my FX 8350, going from OpenGL to Vulkan. But other games like Serious Sam 4 and Total War: Warhammer perform worse on my FX 8350 in DX12.

AMD actually has pretty good articles on GPUOpen from what I have read people saying, but these are very technical articles with actual code, I think; I had a quick look years ago. What would be nice is if @AMD could respond to this forum with a more understandable answer and the reason why their DirectX 11 driver is the way it is.

I haven't gone through this article, but here is something on DX11 from GPUOpen: https://gpuopen.com/learn/optimizing-gpu-occupancy-resource-usage-large-thread-groups/

Kind regards

0 Likes

So I did some quick reading, to quote the following link

"In DirectX 11 you specify the usage which pretty much determines these factors. Then you forget about all this and assume if you got the usage right, your job is done. In reality the resources have a much more dynamic life in the background. They constantly move through different logical states depending on how they are used by the application. Before DirectX 12 all of this was hidden by the drivers," https://gpuopen.com/learn/anatomy-total-war-engine-part-3/

This tells me game developers expect the graphics driver to do all the dynamic scaling, probably for draw calls as well, which is where we need input from @amd. A more concerning issue in the following article is that AMD made extensions for GCN after DirectX 11 was released, which means developers have to specifically optimize for different graphics cards; you can imagine that is too big a task, and I wonder if people are even using these extensions. Unfortunately, I doubt they are.

To quote from the DirectX11 GCN performance section, "DirectX 11 was released before the launch of AMD’s GCN architecture and therefore misses out some important features from its API. AGS allows developers to gain access to some of these features via driver extensions.", https://gpuopen.com/learn/amd-gpu-services-an-introduction/

Then they might actually have a multithreaded draw call feature, "One of the main issues with the DirectX 11 API is its relatively high driver overhead on the CPU. DrawInstancedIndirect is a great way to reduce the number of draw calls by batching together similar objects and generating the instance buffer on the GPU.  However, you can take this to the next level by batching multiple instance buffers together into one Multi Draw Indirect call (MDI). The extension allows the following code: ", https://gpuopen.com/learn/amd-gpu-services-an-introduction/
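The article's actual extension snippet isn't reproduced above, so as a stand-in here is a sketch of my own of the standard DrawInstancedIndirect call the extension builds on (the argument values and instance count are made up for illustration). As the quote describes, AMD's multi-draw-indirect extension then batches several such argument sets into a single submission to cut the per-call overhead further.

```cpp
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Buffer> CreateIndirectArgs(ID3D11Device* device)
{
    // VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation.
    // In a real renderer a compute shader would typically write these on the GPU.
    const UINT args[4] = { 3, 1024, 0, 0 };

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(args);
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;

    D3D11_SUBRESOURCE_DATA init = { args, 0, 0 };
    ComPtr<ID3D11Buffer> buffer;
    device->CreateBuffer(&desc, &init, &buffer);
    return buffer;
}

void DrawBatch(ID3D11DeviceContext* context, ID3D11Buffer* argsBuffer)
{
    // One API call covers 1024 instances, so the CPU-side draw-call cost is paid once.
    context->DrawInstancedIndirect(argsBuffer, 0);
}
```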

I really wish we could find out whether games actually use these extensions.

Kind regards

0 Likes

@hitbm47 My knowledge begins to run out here, as I just haven't looked into it enough, but one statement that is rather backwards is that DX11 was launched before GCN. All that means is that the final specification for DX11 was set and known. GCN also had numerous revisions across different generations of GPUs and rebrands; if I recall correctly there was GCN 1.0, 1.1, 1.2, 2.0 (all starting with Southern Islands), 3.0 (Tonga), 4.0 (Polaris) and 5.0 (Vega). In short, support for DX11 on the hardware side wasn't / isn't an issue, or shouldn't be at least, with all those revisions. As I said in an earlier post, you'd need to talk to someone who writes driver code to discuss the specifics of what the DX11 problem is for AMD, but in the few discussions I had with a couple of programmers, they stated pure and simple that AMD simply can't code a low-overhead driver due to the complexity involved, or that AMD are just unwilling to make the time and money investment to do so, unlike NVIDIA, who did make that investment. This is the only thing I can dig up right now on the bad DX11 overhead for AMD from when I looked into it back in 2019: https://www.reddit.com/r/Amd/comments/6rxun7/why_nvidia_performs_so_much_better_in_dx11/

In short, when DX11 games are single-threaded, NV's driver will automatically split the workload across unused threads to circumvent the bottleneck, or at least significantly reduce it. AMD's driver doesn't (AFAIK) do this. You can also try to analyse AMD's driver with software to see what it's doing, but I never had much success with this and I've forgotten the name of the software for it.

EDIT: It is a different thing, but in a similar situation to AMD's DX11: AMD's bad H.264 encoding. I've not fully tested what it's like on the newest drivers, so it might be something you want to look at. https://www.youtube.com/watch?v=CLqpVImLPGE

Hi @ketxxx 

Thank you for your response, "one statement that is rather backwards is DX11 being launched before GCN. All that means is the final specification for DX11 was set and known. GCN also had numerous revisions with different generations of GPUs and rebrands", this is mentioned from AMD themselves in one of those links. Yes, I understand about the revisions but this means most of the performance improvements of GCN was not integrated into early DX11 games that launched before GCN, I doubt a lot of newer games even make calls to these extensions.

The thing with extensions, as far as I understand, is that they are optional, and game developers have to explicitly call them through code in the game engine / game logic; otherwise no benefit is taken from them, since that part of the driver is never accessed by the game.

A quote from the FAQ on Mantle, which turned into Vulkan: https://community.amd.com/t5/graphics/mantle-graphics-api-faq/td-p/419096

"Q. Why not use OpenGL extensions instead of a new API?
A. The design of Mantle was driven by feedback from leading game developers, who preferred the idea of a fresh start with a new API to the extension and patching of existing ones.  However, we believe that many of Mantle’s concepts are applicable to other graphics APIs, and will inspire their future development.", <-- I wonder if developers had meant that they simply wanted a new DX11 where they did not have to worry about new extensions, but where the API would incorporate the new extensions in each and every DX11 game as new cards came out; instead AMD gave them an API where they sort of have to write some of the extensions themselves. This might be why we are still seeing DX11 games coming out.

Yes, I have seen that video describing Radeon submitting draw calls on a single core, which is also how I came across the Intel article I posted, but I wonder what could be done about it through those extensions. I'll have a look at the video if it is not too long.

Kind regards

0 Likes

Hi @Robert_Hallock 

I hope you are well. I was hoping you could give us a simple, understandable reason as to why it does not seem that AMD Radeon uses command lists in DirectX 11 (or whatever NVIDIA is doing to distribute draw call work to other cores).

It would really be helpful, as I saw this post of yours: https://community.amd.com/t5/blogs/directx-12-unleashes-amd-fx-processors-in-battlefield-1/ba-p/4153...

Kind regards

0 Likes

@hitbm47 To clear up any misunderstandings, I don't expect AMD (nor does anybody else being sensible) to invest huge resources into DX11 or OpenGL. You are quite right about certain DX11 extensions not being supported in the AMD driver (DX11 command lists, for example; AMD never could get them to work properly AFAIK, so they just gave up on them), which is why my conclusion was (and still is, really, because DX11 isn't going away and people will still want to play DX11 games in the future) based around AMD using a driver-level technique that allows heavily single-threaded DX11 titles to pass off some of the workload to other threads rather than slamming that one thread with everything. CPU usage in these cases would be a bit higher (5-10% maybe), but DX11 performance could potentially increase by 20-30%. It's a universal approach, if you will, that would (again, in theory) guarantee a noticeably better DX11 experience with (hopefully) not too much work required on the driver. OpenGL is a bust with AMD; they clearly don't know how to fix it and have given up on that too, but I don't see why they don't let the folks writing the open-source OpenGL Linux driver overhaul the Windows OpenGL code, as they clearly know what they are doing and how to do it. Do those things, AMD, and you can call it a day with both APIs; then we'll just have the crappy H.264 encoding to complain about.

Hi @ketxxx 

I mostly agree with you, but they are spending quite a lot of resources on DirectX 11 if you think about it, just not on the parts we need. For example, they made the Radeon Overlay work, made Chill work, and for a lot of cards Radeon Image Sharpening was first supported in DirectX 11. These are all just as much effort, I would guess, unless they would have to completely restructure their driver for all DirectX APIs at the CPU level just to get command lists working, which is what it seems like NVIDIA possibly did at the time; it might not be a lack of hardware support.

As far as my understanding goes, extensions are separate driver functions that can be called for a specific GPU brand, outside the DirectX specification. I do not think command lists are extensions to DirectX 11, but rather an optional feature of the DirectX 11 specification that could work for all applications as soon as the driver implements it. But it rather seems that @AMD is banking on their new CPUs to brute-force through the single-core load, so that their total driver overhead on the CPU is lower, if you get what I am saying. But this invalidates the support we all gave their pre-Ryzen CPUs and APUs, since it doesn't take us into account, except in the very few games that properly support DX12 or Vulkan, and only about a third of them do.

Apparently, they might port the Linux driver over as soon as they've got the proprietary driver on Linux fully re-integrated with the open-source driver. There is something about the open-source driver not yet being workstation oriented enough, compared to their proprietary driver.

Kind regards

0 Likes