
RTX 3090 Performance Leaked - 10% faster than 3080. Is software to blame?

This is very interesting... Is it possible that the 3090's core count is *too high* for software to take advantage of? That's a situation I never thought about. CPUs, definitely, since most games are really only now taking effective advantage of more than 4 CPU cores, but games having to be coded to take advantage of insane GPU core counts?

https://wccftech.com/nvidia-geforce-rtx-3090-teclab-review-leaked/

"The first answer that comes to my mind is that the amount of core increase that we saw in the 3000 series is just too big for software stacks to handle. While the drivers would (probably) have been updated to handle the massive throughput, game code and engines have to scale up to take advantage of the available processing power as well. This is sort of like games being optimized primarily to take advantage of just 1 core and not scaling perfectly."

3 Replies

This is the sort of thing that will affect AMD and Intel as well as GPU core counts drastically increase, especially with RDNA3, when AMD moves to an MCM GPU model the way Ryzen CPUs have. If core counts are going to pass 10,000, which seems quite likely, then we're going to have to see a massive update in game engines across the board...

Well, it is logical. A great example is the id Tech engine, one of the most advanced, up-to-date engines available; look at how well it scales with Ampere compared to most others.

leyvin
Miniboss

In some respects, Massively Multi-Core Processors are always going to have an issue with diminishing returns from parallelised code, for a few reasons (there's a rough sketch of this after the list):

A • Not all Tasks can be Parallelised

B • Not all Tasks can achieve 100% Utilisation when running in Parallel

C • Overhead for Managing Threads increases with Thread Count
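To put rough numbers on points A and B, here's a minimal Python sketch of Amdahl's Law. The parallel fractions below are made-up illustrative values (a real GPU workload is far more nuanced, and CUDA "Threads" aren't CPU cores), but the shape of the result is the point:

```python
# Minimal sketch of Amdahl's Law: the speedup from n cores when only
# a fraction p of the work can run in parallel. The p values below
# are made-up illustrative figures, not measurements of any engine.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup with n cores when fraction p is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.95, 0.99):
    s_3080 = amdahl_speedup(p, 8704)   # RTX 3080 "Thread" count
    s_3090 = amdahl_speedup(p, 10496)  # RTX 3090 "Thread" count
    print(f"p={p:.2f}: 3090 over 3080 = {s_3090 / s_3080 - 1:+.3%}")
```

Even with 99% of the work parallelised, the extra ~1,800 cores buy well under 1% extra speedup at this scale; the serial fraction dominates long before you reach ~10,000 cores.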

Now, as a key point: the RTX 3080 has 8704 Threads while the RTX 3090 has 10496 Threads.

Strictly speaking, that's only an increase of ~20.6% anyway... and Frequencies are almost identical.

At least on paper... remember that while the "Boost" figure is reported as 1.71GHz, the reality is the GPU Boost algorithm will likely keep the RTX 3080 at around 2000MHz.

That's what most reviews are showcasing; and based on what Gamers Nexus have done, it's fairly clear that's the edge of the stability and power curve for these cards; they could barely get more than 50-100MHz beyond that.

I'd imagine that the RTX 3090... well, it's just not capable of sustaining clocks even that high.

Older Graphics APIs (and I think the above figures showcase it well) simply won't be able to saturate the threads, while the Newer Games are pushing Utilisation higher, and thus are more Power Demanding, almost certainly causing Throttling.

As noted, the "Best Case" is ~20.6% Faster, while what we're seeing in the Benchmark "Best Case" is 11.5%... if it's doing that while only capable of boosting to, say, 1860MHz instead of 2000MHz, well then, what we're seeing is just the limitations of Ampere.
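For what it's worth, the arithmetic is self-consistent. Here's a quick back-of-the-envelope check; the ~2000MHz sustained clock for the 3080 comes from the reviews mentioned above, and the naive cores-times-clock throughput model is an assumption, not how a real GPU behaves:

```python
# Back-of-the-envelope check of the argument above. The ~2000MHz
# sustained clock for the 3080 comes from the reviews mentioned
# earlier; the naive cores x clock throughput model is an assumption,
# not how a real GPU behaves.

cores_3080, cores_3090 = 8704, 10496
clock_3080 = 2000.0  # MHz, roughly what reviews report under load

# If throughput ~ cores x clock, a measured +11.5% lead implies:
implied_clock_3090 = cores_3080 * clock_3080 * 1.115 / cores_3090
print(f"Implied RTX 3090 sustained clock: {implied_clock_3090:.0f} MHz")
# -> ~1849 MHz, right in line with the ~1860MHz guess above
```

An implied ~1850MHz sustained clock on the 3090 matches the ~1860MHz guess, which at least makes the power/throttling explanation hang together.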


Will this happen to Navi 2X as well?

Well, not really... no.

I mean, keep in mind that "Biggest Navi" is only 5,120 Threads, and that's a 505mm² GPU.

AMD could potentially go bigger still (in fact we've seen some murmurs of such from Radeon Instinct, where there is apparently a 92CU Variant), but I doubt they could really fit much more than 10 Graphics Engines on a Single-Die GPU.

Multi-Chip Processors are of course going to be something Navi's Successor will likely use, but as it stands Infinity Fabric isn't at a point yet where the Latency is low enough for Graphics Processors to be produced that way.

RDNA is firmly designed to utilise such when it is ready... still, we'll see it in Zen (Ryzen) before we see it in RDNA (Radeon).

We're expecting to see better latency from Zen3 (on N5) in 2021... which means 2022 is the earliest we'll see it in Radeon.

As such, I think we have some time before we see a similar "Thread Wall" hit AMD in terms of Diminishing Returns, at least to a noticeable degree.

Keep in mind that Frequency gains above 2.0GHz only deliver around half their nominal uplift, which is to say going from 2.0GHz to 2.5GHz isn't a 25% Gain in practice but closer to a 12% Gain... it's why I think it'll be sensible for AMD to keep their Clock Frequencies roughly the same, even if they could go up to 2.25GHz with Navi 2nd Gen.
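As a quick sketch of that arithmetic (the 50% effectiveness factor past 2.0GHz is this post's assumption, not a measured property of RDNA2):

```python
# Sketch of the diminishing-returns claim above. The 50% effectiveness
# factor beyond 2.0GHz is this post's assumption, not a measured
# property of RDNA2.

base_clock, new_clock = 2.0, 2.5  # GHz
effectiveness = 0.5               # assumed perf return on clocks past 2.0GHz

nominal_gain = new_clock / base_clock - 1      # 25% on paper
effective_gain = nominal_gain * effectiveness  # ~12.5% in practice
print(f"nominal: {nominal_gain:.1%}, effective: {effective_gain:.1%}")
```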

Leave something in the tank for Navi 3rd Gen on N5P, as they might not be able to make the same IPC gains they're making this Gen; and as I have a feeling Navi 3rd Gen will be Refinement + Node (like the RX 300 Series was), the best bet is to be a little conservative.

Even if that means the RX 6800-Series is a little slower than the RTX 3080, the reality is the RX 6900-Series will STILL end up the Performance King... plus the Cards will run Cooler and Quieter, hopefully shaking off some of the damage that GCN did to Radeon's reputation in terms of Efficiency.

NVIDIA have really thrown a lot of resources at keeping the RTX 30-Series Cool-and-Quiet, even if they're starting to challenge the FURY for power hunger.

But I don't think NVIDIA could've played this any other way.

Turing and Ampere are arguably failed Architectures, from a technical perspective.

They NEED something completely new... the only gains they've got have come from a Brute Force approach.

Frequency really hasn't been lifted at all, Power Requirements have skyrocketed... they NEEDED all those extra Cores / Threads for their performance gains.

Respectable as those gains are... AMD right now simply has the better Architecture, and I'd argue NVIDIA might've had a better idea of this had Apple not taken all of the N7 Production from AMD, limiting RDNA 1.0 to just 1 Discrete and 1 OEM/Embedded Variant.

And AMD being AMD, they focused on producing their Mid-Range Options; thus that's what we got.

Its performance being a little "weaker" than the RTX 2070 is, I think, what ultimately resulted in NVIDIA underestimating them... but think about it for a moment: their previous "Mid-Range" competed with NVIDIA's x60 Class, which was NVIDIA's "Low-Range". It was weird how few actually noticed how suddenly AMD's Mid-Range leapt up a Tier to compete against NVIDIA's x70 Class.

Instead people were too preoccupied with the pricing... which to a degree was likely a decision made out of greed by AMD (as they could, plus N7 Node Prices rose sharply because Demand outstripped Supply, so it was also a necessity to maintain _some_ measure of profitability) - well, that and the Drivers being dire.

Still, all these things culminated in the RX 5700 being a REALLY underestimated piece of Hardware, when arguably it was a clear-as-day harbinger.

As it stands, Ryzen 4th Gen and Navi 2nd Gen... well, it looks like they're going to be a 1-2 Punch to the Industry.

Personally I'm a little on the edge of my seat to see how AMD screws it up this time.

But maybe (we can hope) the Hardware is good enough to offset whatever they do to shoot themselves in the foot, a bit like Ryzen when it launched; that was a TERRIBLE launch, but the Hardware was good enough (and at a reasonable enough price) to see them through it.

I'm hoping the rumours I've been hearing about pricing are true, because that would certainly level out the Market Share between AMD and NVIDIA. It'd be quite healthy to see them actually directly competing.