
Rumor: AMD Radeon RX 3080 XT Will Challenge RTX 2070 at $339

There’s a new set of rumors around AMD’s upcoming Navi GPUs, though based on their contents and structure, we’d advise you take them with a substantial grain of salt. While they’re eye-opening, it’s not at all clear they’re accurate.

Hot Hardware reports on rumors that the Radeon RX 3080 XT will match the performance of the GeForce RTX 2070, but undercut that GPU on price, coming in at $330. This, of course, would match the old price on the GTX 1070, and might be read as AMD “restoring” the GPU market to its original, pre-RTX configuration. The GPU will reportedly be based on Navi 10 and ship with 8GB of GDDR6 memory. The RX 3080 XT is supposedly a 56 CU card; higher-end models with 60 CUs and 64 CUs will be reserved for the Navi 20 GPU family, which isn’t expected until 2020. TDP for the 56 CU Navi GPU is supposedly 195W.

[Chart: Rumored Navi lineup]

* – Unconfirmed part. Radeon VII provided for reference.

I’m skeptical of this claim for several reasons. First, it implies that AMD made a decision to build two different GPUs around a very narrow difference in core count. A 56 CU Navi 10/RX 3080 XT would have 3,584 GPU cores. A 60 CU Navi 20 (hypothetically branded as the RX 3090) would be 3,840 cores. That’s just a 7 percent difference in core count. Even if AMD goes for a higher number of cores per CU (say, 128 instead of 64), the percentage gap in core count won’t change. Nvidia used separate physical GPUs for the RTX 2070 and RTX 2080, but the RTX 2080 has 1.27x the GPU cores of the RTX 2070. It seems unlikely that AMD would build two completely different GPU designs solely on the basis of a 7 percent core difference.
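As a quick back-of-the-envelope check of that math, here's a minimal sketch in Python. The 64 stream processors per CU figure is GCN's historical value and is only an assumption for Navi.

```python
# Rough check of the rumored core counts, assuming GCN's 64 stream processors per CU.
CORES_PER_CU = 64

rx_3080_xt_cores = 56 * CORES_PER_CU   # 3,584 cores
rx_3090_cores = 60 * CORES_PER_CU      # 3,840 cores

gap = rx_3090_cores / rx_3080_xt_cores - 1
print(f"Core-count gap: {gap:.1%}")                # ~7.1%

# Doubling the cores per CU scales both parts equally, so the gap doesn't change.
gap_wide = (60 * 128) / (56 * 128) - 1
print(f"Gap at 128 cores per CU: {gap_wide:.1%}")  # still ~7.1%
```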

Next, there’s the question of TDP. Except for the Radeon Nano, which offered the same physical configuration as the Radeon Fury X at a substantially lower TDP, GPU TDP typically holds constant or increases as you step up the stack. The Radeon RX 3090 is supposedly a 180W TDP card, whereas this year’s Radeon RX 3080 XT is a 195W TDP card. This could reflect the fact that Navi 20 might be built with EUV, but Navi 20’s TDPs probably aren’t even known yet, nor is it known whether the chip will be built on 7FF+ at TSMC. Even assuming it is built on TSMC’s 7nm EUV process, it isn’t clear that AMD would have silicon back to characterize the TDP range at this point in the development process. Assuming Navi 20’s 2020 target is accurate, it’s early to be hearing about formal TDPs.

The price banding on this proposed stack is also odd. If the RX 3080 XT is a $330 card, but the RX 3090 has only 7 percent more cores than the RX 3080 XT, then the only way to justify the 1.26x price increase is going to be with a substantial clock leap. GPU pricing is typically commensurate with performance in these ranges, as this chart from AnandTech’s Turing coverage makes clear.

Image by Anandtech

If AMD is going to slap a 1.26x price increase on the RX 3090 (from $339 to $430), it’s going to have to deliver increased performance. A 7 percent core count increase isn’t going to cut it, which means something like a 1.2x – 1.3x clock jump (and clock speed increases may not deliver a perfectly linear performance improvement). Given that we know Navi is still based on GCN, taking bets on high core clocks is a risky proposition. One of the defining traits of GCN is that it is not a high-clock architecture.
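Here's a rough sanity check on that claim, a minimal sketch that treats performance as proportional to cores × clock, which real GPUs only approximate:

```python
# Rumored prices and core counts for the two Navi tiers (all figures unconfirmed).
price_rx_3080_xt, price_rx_3090 = 339, 430
cores_rx_3080_xt, cores_rx_3090 = 56 * 64, 60 * 64

price_ratio = price_rx_3090 / price_rx_3080_xt   # ~1.27x
core_ratio = cores_rx_3090 / cores_rx_3080_xt    # ~1.07x

# If price is to roughly track performance, and performance scales with
# cores * clock, the clock has to make up the rest of the gap.
required_clock_ratio = price_ratio / core_ratio
print(f"Required clock increase: {required_clock_ratio:.2f}x")  # ~1.18x
```

That lands right around the 1.2x figure above, before accounting for any non-linear scaling.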

Could AMD have finally managed to solve this problem with Navi and 7nm? Sure. But given that we’ve seen GCN fail to match Nvidia clocks from the Maxwell era (when the gap was relatively small), straight through Fury X, Polaris, Vega, and Vega 20 on 7nm, we’re going to have to see the gains to believe them. It’s much easier to imagine that AMD went wide with Navi, taking advantage of the die shrink to further increase core counts, than to picture the architecture suddenly gaining another 400-500MHz of clock. We know that AMD made gains on die size — Radeon VII is 331mm², compared with a 487mm² die for Radeon Vega 64. In previous conversations with the company, AMD engineers have indicated that the 4,096-core count on Fury X and Vega 64 was not an absolute, intractable limit, but partly a function of a desire to constrain die size and continue to fit HBM2 easily on a desired package. This doesn’t mean AMD automatically built a larger chip, but the 7nm die shrink theoretically affords it the ability to do so.

The issue of mismatched pricing to performance gains compounds with the supposed Radeon RX 3090 XT, which is 1.16x more expensive than the RX 3090 but offers only 10 percent more performance. This means the RX 3090 XT would be roughly 10 percent faster than the GeForce RTX 2080 at $500, but it also means the RX 3090 wouldn’t bring much in the way of additional performance to the table, though it would represent a substantial price cut over the Radeon VII. Radeon VII benchmarks from our review are shown below:

Ashes isn't a great showing for AMD, even when using DirectX 12. The Radeon VII isn't far off the GTX 1080 Ti, but it lags behind the RTX 2080. It's not great when a brand-new GPU design on 7nm can't quite match a 16nm GPU that launched nearly two years ago.

Last year, there was a persistent rumor going around that AMD would bring Navi 10 to market at $250, breaking the back of the GeForce RTX 2070 at $500. Now, the rumored price has jumped to $330. Price is always the last thing set before a launch, which is why we knew that rumor was wrong back in December. Could $330 be the right target? Yes. But given that AMD will supposedly launch Navi 10 at E3, it’s also possible the company is still finalizing its prices. The wild rumors around AMD’s supposed plan to inflate core counts and slash per-core pricing on its third-generation Ryzen CPUs are inaccurate, as we’ve explained before. And these rumors don’t agree with previous rumors on TDP or price. The “old” rumor around the RX 3080 (no XT) suggested a $250 card at a 150W TDP competing against the GTX 1080 / RTX 2070, not a $330 GPU competing against those cards at a 195W TDP. That doesn’t mean the new rumors are wrong, but clearly someone is. These rumors don’t make great sense either.

[Chart: Earlier rumored AMD Navi lineup]

Previous rumored configuration, with substantially lower TDP and price.

The Question of Price

It also isn’t clear exactly how AMD will respond to Nvidia’s attempts to raise GPU prices in 2018-2019. On the one hand, enthusiasts would obviously love to see AMD restore the old stack and gut Nvidia’s pricing model. AMD has pulled this type of trick on Nvidia before — back when Team Green launched the GT200 family, AMD’s HD 4000 family was such a strong counter, Nvidia had to slash its launch pricing and introduce a new, faster variant of its second-highest-end GPU.

But there are risks to this strategy that AMD will be weighing closely. Back in the GT200 era, the only feature difference between AMD and Nvidia came down to extras like PhysX. Nvidia is putting a much heavier push behind ray tracing than it ever did behind PhysX, and is actively attempting to position the capability as the future of GPU rendering.

If AMD undercuts Nvidia’s GPU pricing and does so with GPUs that lack ray tracing, it could be read as a tacit admission that Nvidia has established ray tracing as a feature that customers will pay more for. When AMD introduced Radeon VII, it deliberately didn’t price that GPU any lower than the RTX 2080, despite the fact that the Radeon VII completely lacks ray tracing. It is possible that the company will do something similar here, or choose to split the difference by pricing below the equivalent RTX GPU it intends to compete with, but not so far below as to imply that Nvidia has properly priced in the value of ray tracing. Despite reports that the PS5 will feature ray tracing, we’ve heard nothing about Navi 10 supporting this feature. And AMD has said it wants to wait to introduce RT until it can introduce it from the top to the bottom of the stack. That could mean AMD is keeping quiet about ray tracing support on 7nm — or that it doesn’t intend to introduce the feature in 2019.

But as it stands, this rumor is, at best, incomplete. It implies an odd pricing structure that would require AMD to hit much higher clocks on GCN than it has ever demonstrated a capability to hit. The core counts also imply that AMD is relying heavily on efficiency gains to hit its performance targets, but efficiency gains in GPUs have been hard to come by of late. Vega was not, generally speaking, a large efficiency gain over previous versions of GCN. Could Navi change that? Yes. But historically, we’ve seen GPUs gain the most performance either by clock boosts (which GCN hasn’t been very good at) or core count increases (which this rumor implies have not occurred).

If this rumor is accurate, AMD has either substantially improved Navi’s innate GPU efficiency compared with previous iterations of GCN, or it will content itself with slashing prices rather than driving performance higher, with top-end performance at $500 that would still fall below the RTX 2080 Ti (albeit at a vastly lower cost). The proposed price structure makes limited sense without massive clock increases to drive performance in the upper-tier products. And finally, it’s not clear why AMD would build two completely different chips for Navi 10 and Navi 20 if the difference between the two is just a 7 percent core count increase. This is a much smaller gap than exists between the various Nvidia GPUs in their respective brackets and custom designs.

Rumor: AMD Radeon RX 3080 XT Will Challenge RTX 2070 at $339 - ExtremeTech 

2 Replies

Very interesting news. If they truly have a product that is as fast as or faster than an RTX 2070 for 300 bucks, I would be very much interested in being a buyer. That being said, it would also be contingent on this new product actually working at default settings. Meaning it works out of the box with no tweaking. No voltage changes or speed changes to get it stable. If I see this product return to that type of experience I will for sure buy one, as it is right in my sweet spot on price and performance.

leyvin

The rumours that I've been hearing, at least from a month or so ago... were that Navi is without a doubt GCN 2.x based, but has removed a lot of legacy GCN 1.x elements.

In terms of performance it was showcasing a quite impressive 30-35% gain per CU over Polaris; however, at the same time I've been hearing about issues with Drivers and Stability in "Real World" scenarios (à la Vega 1st and 2nd Gen).

That is to say, under DirectX 12 / Vulkan it's apparently an incredible Architecture that works amazingly, but with the Legacy APIs such as DirectX 11 / 10 / 9 and OpenGL... it's simply an unstable mess; with perhaps worse being that TSMC 7nm was great in Low Production, but seems to have a lot of issues with High Production (i.e. what's needed for Retail) numbers.

And that wouldn't exactly be surprising, as NVIDIA also had reservations about TSMC 10nm (hence why they chose at the last minute to remain on 12nm) but even then saw a similar situation, where the Titan V was "Fine" in Low Production, but the RTX 20-Series in High Production (especially the Performance Models) was having production issues with very low yields.

The irony being that Zen 2 had initial issues with 7nm (unlike Vega / Navi) in Engineering Samples, but has since been "Okay" when ramped for full Production.

Although I've also been seeing that their Clocks are 200-300MHz LOWER than initial hopes... Zen 2 is seeing big gains from other areas of the Architecture, so that isn't really a major issue; it will just perhaps be a disappointment to Enthusiasts seeing an increase from 4.2GHz to 4.4 - 4.5GHz instead of the expected 4.6 - 5.0GHz they were hoping for.

Still, for Navi I can see half of their performance gains stemming from the increase from 1400MHz (Polaris 20) to 1700MHz (Navi 10)… and if it can't consistently hit that, well, that'll be problematic, as unlike Zen, GCN isn't exactly dominant over Pascal / Turing outside of the Low-Level Graphics APIs that are STILL not in widespread usage.
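Taking the numbers in this thread at face value (they are all rumors), here is a quick sketch of how much of that rumored 30-35% per-CU gain the clock bump alone would account for:

```python
# Rumored clocks quoted above (both unconfirmed).
polaris_20_mhz = 1400
navi_10_mhz = 1700

clock_gain = navi_10_mhz / polaris_20_mhz - 1
print(f"Gain from the clock bump alone: {clock_gain:.0%}")  # ~21%

# Against the rumored 30-35% per-CU uplift, the clock bump alone would explain
# most of it, with the remainder coming from architectural (IPC) improvements.
for total_gain in (0.30, 0.35):
    print(f"Share of a {total_gain:.0%} uplift from clock: {clock_gain / total_gain:.0%}")
```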

And frankly AMD have only themselves to blame for that, by not investing heavily into GPUOpen and creating a Low-Level API Toolkit (something like GameWorks) to encourage adoption and more easily facilitate the transition from the HAL to the CTM approach. As well, a lot of these "Kid" Developers have NEVER, in their entire careers, used anything but DirectX 9/10/11, which are VERY similar in their approach and utilisation, are easily migrated between, and are highly Abstracted.

Or worse still, they only use Middleware Engines; which again AMD doesn't have much influence over to encourage Low-Level API Adoption.

With the biggest kicker being that many of the Studios ARE using Vulkan for Switch and PlayStation, but then completely ignoring it for PC / Xbox One, or not even supporting DirectX 12 over DirectX 11... for "Reasons": typically that said platform ports are offloaded to 3rd-Party Developers who exclusively work on said Platforms and have ZERO intention of supporting a new API (or only barebones support), typically because NVIDIA DO have influence over them, or because they heavily rely on GameWorks as it's the only real option as a Multiplatform Toolkit.

AMD HAVE to realise that simply releasing Good Hardware IS NOT enough. 

Still, beyond that they have no real intention to compete against NVIDIA. 

Sure, they intend to have a High-End Competitor (FURY / Vega / Navi), but those aren't competing at the Bleeding Edge, merely within the Consumer "Enthusiast" Market... and even then just trading blows, and not doing anything unique that they're ensuring "on-the-books" Developers really take advantage of, to the point where the Tech Press can't just handwave it away as 'an exception' rather than the rule.

And this makes sense as AMD's Dominance (as much as they have such) is in the Low-End to Mainstream. 

It's frankly almost as if neither AMD nor NVIDIA actually have any intention of really Competing against each other.

AMD are content being the Budget Cards, NVIDIA are content being the Premium Cards... and it's a very milquetoast fight over the Mainstream (Mid-Range) Sector, where we do see direct competition. Both are arguably as good as each other at the same price... Consumers ultimately don't care which they have beyond Brand Loyalty; which is a BAD situation to have, because Brand Loyalty is usually ONLY as Loyal as the Marketing that Drives it.

Fall out of Favour, and you've lost said Market for at least a Generation. 

It's why I'd argue that AMD should never have catered to the Cryptocurrency market, or allowed their AIB Partners to do so.

As they could've easily used that to destroy not only NVIDIA's competitive edge in the Mainstream Market but also their Branding… establishing the RX 480 / RX 580 as the "Kings of Mainstream" while leaving NVIDIA to chase the imaginary Cryptocurrency Gold Rush.

The Delays that have pushed Navi back by 6 months (from its original launch), as well as AMD pushing the Radeon VII (not a Gaming Card but a Prosumer Card, sold as IF it were a Gaming Card as a Stopgap... which was a BAD freaking move; it should've been Sold as a Premium Prosumer / Developer Card, with a focus on Creative Development rather than Gaming Capabilities), well, it has just created more hurdles to overcome.

That said, they should NEVER have handled Polaris the way they have either.

The RX 500-Series was released too early, and should've launched in Q4 2017 with another 300MHz, and on the same 12nm process that the Ryzen 2000 Series used.

I'd also have increased the Compute Unit Counts.

RX 550 to 16CU (1200MHz / 4GB)

RX 560 to 24CU (1300MHz / 4GB)

RX 570 to 32CU (1400MHz / 4-8GB)

RX 580 to 40CU (1500MHz / 8GB)

As well as introducing an RX 590 with 56CU (to compete with the GTX 1070) instead of the RX Vega 56. 
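For scale, here is a minimal sketch of the theoretical FP32 throughput those configurations would imply. These SKUs are the what-ifs proposed above, not real products; the 64 stream processors per CU and 2 FLOPs per clock (FMA) figures are standard GCN assumptions, and the RX 590's clock is assumed since it isn't stated above.

```python
# Theoretical FP32 throughput of the hypothetical lineup proposed above,
# assuming GCN's 64 stream processors per CU and 2 FLOPs per clock (FMA).
hypothetical_lineup = {
    "RX 550": (16, 1200),
    "RX 560": (24, 1300),
    "RX 570": (32, 1400),
    "RX 580": (40, 1500),
    "RX 590": (56, 1500),  # clock not stated above; assumed here
}

for name, (cus, mhz) in hypothetical_lineup.items():
    tflops = cus * 64 * 2 * mhz * 1e6 / 1e12
    print(f"{name}: {cus} CU @ {mhz} MHz ≈ {tflops:.1f} TFLOPS FP32")
```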

In said case I'd have reserved the RX Vega as "Prosumer" Cards... with 16GB HBM2, not having much better performance than the RX 500-Series but having the VRAM for Professional Workloads while still remaining a capable Gaming GPU.

With the first being 14nm, the second 12nm, and the third 7nm.

These could effectively be rebranded Radeon Instinct cards with DisplayPorts... thus their costs could be underwritten via said Accelerator Sales (much like the Radeon VII should've been, IF it had been $100 more expensive and marketed AS a Prosumer rather than an Enthusiast Card, with the focus on Maya / POV-Ray / Blender / etc. Performance Scores over the Gaming Benchmark Results).

And as noted, essentially disable the ability to do Blockchain / Mining on the Retail (RX) Cards.

Instead, introduce the Radeon Instinct for "Public" Availability... as remember, you can only get / use Instinct via a 3rd-Party Retailer as part of Server Packages; you can't just purchase them "Off the Shelf", but you SHOULD be able to.

If this had been done with a focus on creating a Low-Level Graphics API Toolkit that essentially allowed Developers to rapidly and easily (with little of their own effort) begin to take advantage of not only the improved performance of GCN (which was BUILT with said APIs in mind) but also the GCN "Party Trick" Features, such as FP16.

Implementing FP16 Support on Polaris is a pain, even if you know what you're doing... having a Toolkit for such would've been invaluable.

Same goes for how they should've heavily pushed Crossfire / Multi-GPU, which they've instead all but abandoned.

As 2x RX 480 is cost-wise similar to a GTX 1080 (and was cheaper at launch) but provided very similar performance... as AMD have noted many times, On-Card / On-Die Crossfire is a serious technical challenge, but via Software it works a treat; had they worked with the Zen Team on a 'special' Architecture element that made Crossfire highly efficient, it would've been advantageous to have not only a Ryzen CPU but also an RX Radeon Card.
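For reference, a rough check of that cost comparison, using launch MSRPs from memory rather than figures from the post (so treat the exact dollar amounts as assumptions):

```python
# Approximate launch MSRPs (assumed, not from the post above).
rx_480_8gb = 239
gtx_1080 = 599   # Founders Edition launched higher, at $699

pair_cost = 2 * rx_480_8gb
print(f"2x RX 480 8GB: ${pair_cost} vs GTX 1080: ${gtx_1080}")
print(f"Difference at launch: ${gtx_1080 - pair_cost}")
```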

While also offering Developers who support Crossfire some 'Free Samples' (which Development Studios tend not to pass up... offer Free Hardware provided they support X, Y, Z, and they'll support X, Y, Z)… yes, I know AMD does have such a program, but it's hardly widespread knowledge, nor easy to engage with them about.

Whereas a Program like Microsoft's ID@Xbox, where they literally gave a Free Development Console to each Developer accepted into the Program (additional XDKs still cost $3,000 / Unit), well, it's a great way to get Studios hooked into your Development Ecosystem.

Same with, say, Servers or Render Farms: partnering with, say, HP / IBM / etc. to offer Pre-Built Solutions, again with a "Sample" (which Independent Studios would've leapt at), would've gone a long way to establish an Ecosystem.

Back it up with a good Toolchain (remember, AMD are the Kings of Compute), perhaps even a Developer-centric area of the Forum (yes, again, we have such, but seriously, it's not exactly Open and it turns off most Developers) connected to the GPUOpen Initiative, and they'd be in a strong position to disrupt the market, and to see more products that commonly showcase AMD in dominant performance positions... which drive sales, but also bring an increase in Brand Recognition and Trust from Consumers who see their Favourite Developers beginning to favour said Hardware.