
General Discussions

AMD fires back at 'Super' NVIDIA with Radeon RX 5700 price cuts

AMD unveiled its new Radeon RX 5700 line of graphics cards with 7nm chips at E3 last month, and with just days to go before they launch on July 7th, the company has announced new pricing. In the "spirit" of competition that it says is "heating up" in the graphics market -- specifically NVIDIA's "Super" new RTX cards -- all three versions of the graphics card will be cheaper than we thought.

The standard Radeon RX 5700 with 36 compute units and speeds of up to 1.7GHz was originally announced at $379, but will instead hit shelves at $349 -- the same price as NVIDIA's RTX 2060. The 5700 XT card, which brings 40 compute units and speeds of up to 1.9GHz, will be $50 cheaper than expected, launching at $399. The same goes for the 50th Anniversary edition, with a slightly higher boost speed and stylish gold trim, which will cost $449 instead of $499.

That's enough to keep them both cheaper than the $499 RTX 2070 Super -- we'll have to wait for the performance reviews to find out if it's enough to make sure they're still relevant.


Review: Nvidia Turing Architecture Examined And Explained - Graphics
"Right-o, let's dig deeper. You know how we mentioned that Turing takes liberally from the Volta playbook? That's in evidence when looking at one of Turing's SM units. The biggest change from Pascal is that Nvidia now puts in 64 FP32 and 64 INT32 (floating point and integers, respectively) into each SM, rather than have 128 FP32. Still, counting CUDA cores properly means only taking the FP32 into account, so each SM has half the CUDA cores as Pascal."


Just remember, FP32 performance doesn't mean anything for games; AMD proved that quite nicely with how Fiji performs relative to Polaris and Navi cards with lower FP32 throughput. To borrow a couple of charts from TweakTown's article on how the Fury X performs in 2020: despite having twice the computational power of the 5500 XT and 5600 XT, it performs far worse. Also visible is how the computationally weaker 5700 XT outperforms the more powerful Vega 64 Liquid.

That's also been part of the conversation on WCCFTech, and likely other sites, which are discussing how Ampere's dramatic compute performance increases seem, at least in nVidia's released benchmarks, to translate into similarly dramatic rasterization performance increases.


The R9 Fury X is on 28nm, therefore it draws more power and has a larger die.
The Fury X drivers have received no optimization for years.

The RX 5700 XT / Navi had architecture changes to improve gaming performance.
I am well aware that higher FP32 throughput does not mean better gaming FPS when comparing GPUs on different processes and architectures.

I was just pointing out what a CUDA core is and how they are counted.

The Nvidia GPUs are providing a big uplift in rasterisation performance.
There is a Digital Foundry video about it.

colesdav wrote:

What is a CUDA core on:




That applies to the number of unified shaders on a given card. NVIDIA has marketed their cards for compute work such as video encoding etc. 


So according to this video, AMD board partners are being told by AMD to expect Big Navi to compete no higher than an RTX 3070. So once again, is AMD only going to compete at the mid level?

What "if" BIG Navi isn't so BIG? AMD Giving Up on the High-End... Again - YouTube 


Nobody has Ampere yet; release day is September 17.


Digital Foundry do.

Everyone is doing "back of the envelope" calculations based on RX 5700 XT performance and Gears 5 on Xbox Series X.

The problem is:

(1). No one outside AMD/Microsoft knows how much GPU power the Xbox Series X has.

(2). There is a video about Gears 5 on Xbox Series X that claims "RTX 2080 rasterisation performance".
(3). No one knows if AMD will release Big Navi with GDDR6, HBM2e, or both for gamers. I do not think GDDR6X is an option.

(4). There is the AMD claim of +50% performance/watt for RDNA2. But who knows if they met that, whether it is based on the Xbox Series X, or whether it scales to a larger GPU die / higher performance level / higher temps.
(5). No one knows exactly how "Big" Big Navi will be.
(6). Will Big Navi be on TSMC 7nm DUV or EUV? The newer process should allow a 10% performance gain, apparently. However, it is very likely more profitable to use newer process capacity for Zen3.
(7). Finally, no one knows exactly what RTX 3070 performance is yet, apart from Nvidia and those authorised to test it.
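For what it's worth, the "back of the envelope" calculations everyone is doing look roughly like the sketch below. Every number in it is a placeholder assumption (baseline FPS, CU counts, scaling efficiency), not a spec or leak:

```python
# Crude performance guesstimate of the kind described above.
# EVERY figure here is an assumption for illustration only.

BASE_FPS = 60.0        # hypothetical RX 5700 XT average FPS in some game
BASE_POWER_W = 225.0   # hypothetical RX 5700 XT board power

CU_SCALE = 80 / 40     # rumoured "Big Navi" 80 CUs vs the 5700 XT's 40
CU_EFFICIENCY = 0.8    # performance rarely scales linearly with CU count
PROCESS_GAIN = 1.10    # the "~10% from the newer TSMC process" in the post

est_fps = BASE_FPS * CU_SCALE * CU_EFFICIENCY * PROCESS_GAIN

# If AMD's claimed +50% performance/watt for RDNA2 holds, the power
# needed to reach that performance level would be roughly:
est_power = BASE_POWER_W * (est_fps / BASE_FPS) / 1.5

print(f"guesstimated FPS: {est_fps:.0f}, power: {est_power:.0f} W")
```

Which is exactly why point (4) matters: the result swings wildly depending on whether the +50% perf/watt claim scales to a big die.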

I have already compared the following cards running Gears 5 at 4K Ultra with Min FPS at "Normal" (no Dynamic Resolution Scaling).

RTX 2080 OC: stock GPU and memory clocks; power set to Max Performance; fans 100%; total power into card (GPU and PCB) = 225 W.
RX Vega 64 Liquid: GPU +0.5%, HBM2 at 1200 MHz; Max Performance; power +50%; fans 100%; total power into GPU = 360 W.
RX 5700 XT: stock GPU and memory clocks; power +50%; fans 100%; total power into GPU die reported as 260 W.

Based on those tests and the usual rumors, my gut feeling / guesstimate is the following:

I think Big Navi will struggle to beat an RTX2080Ti if it uses GDDR6.

I think they will need to use HBM2e to reduce power used in the memory controller and memory, and decrease latency into the GPU Pipeline to perform much better than RTX2080Ti.

I hope there will be a Big Navi reference version: a 2-slot, 40mm GPU with a good AIO cooler, a large enough radiator, and 16GB of HBM2e.

I hope AMD have considered HBCC for Big Navi.
They may have gone all the way and allowed fitting an on-GPU SSD; however, loading the GPU over PCIe 4.0 from DRAM may make HBCC much better without one.

I think the AMD Raytracing solution will be poor in comparison to what Nvidia offer on RTX2080.

I hope the AMD reference PCB actually monitors and controls power into both the GPU core and the PCB.
I hope AMD do not set the GPU power above 225 watts, because I have seen how the RX 5700 XT and RX Vega 64 fare under an AMD reference blower cooler.

But the above are all guesses.

The RDNA2 architecture changes will have to be significant, because shifting to the newest TSMC process gives only a 10% performance gain.
I have a very low confidence level in Big Navi and the new RDNA2 drivers.

I would rather AMD wait to see how the RTX 3000 series performs, estimate how Nvidia's Ti versions could perform, and then decide what to do with Big Navi.
If that means there is only an announcement about Big Navi in 2020 and no real launch, then fine.
I would rather AMD got the launch drivers, VBIOS, and Adrenalin 2020 GUI/UI fixed, and did it right rather than repeating the Vega, Radeon VII, and Navi failures.

The videos you post are all speculation; just wait for the reviews of Ampere and RDNA2.

Speculation and rumours should be let go; independent reviews are what count.


Partially true. Digital Foundry has a real 3080. They are not showing actual FPS due to NDA, but are showing real-world percentage differences. Regardless, this thread has been pretty much nothing but speculation, opinions, and what-ifs for a long time now.


The Digital Foundry Video is not speculation at all.


Well, for the six days since the Nvidia RTX 3000 event, I guess it has been.
Perhaps we should discuss AMD's latest product launch, just like on Reddit?

AMD Custom Cruiser Bike
AMD Custom Mountain Bike

According to a moderator on the Tom's Hardware forums, the mountain bike is the same exact bike being sold at Walmart for $148, just with the AMD logo and orange tires. This would continue the line of "really garbage products bearing the AMD logo", along with AMD Gaming RAM and AMD Gaming SSDs.

A little bit of an information compilation about Zen 3 and other upcoming AMD things besides RDNA2, which, as we know from the lack of regulatory filings as well as AMD's statements, is months away.

I thought I would excerpt this section, since it proves yet again that absolutely nothing is preventing AMD from supporting X370/350 motherboards with Zen 3: they are going to blow away all backwards compatibility on X470/B450 motherboards to allow them to use Zen 3 processors. According to GamersNexus, AMD's primary argument was that they didn't want flash-forwarded motherboards incompatible with Zen, Zen+, and Zen 2 on the used market, especially in the Asia/Pacific region, where upgrade cycles are longer than in the US and European regions.


colesdav wrote:

Digital Foundry do.

When I posted my comments on Ampere I was disappointed at the lack of adequate VRAM.

I have commented often about 16GB VRAM. This would be able to exceed the console which is desirable.

AMD does not like me posting my content; your loss.


The one way BIOS flash is nonsense.
I will likely end up buying a new motherboard for Zen3.
As for RDNA2 ...


The interesting thing is that the mystery SKU from Nvidia is likely the RTX 3070 Super/Ti or 3075, whatever they call it.

While it may miff a few 3070 buyers that they could have had a faster card with more memory for 50-100 bucks more, it makes perfect sense for Nvidia to do this. They pretty much already stole all of AMD's thunder on their upcoming GPU launch, and will likely do the same at launch, knocking out Big Navi with a direct SKU. With pretty much everyone expecting that the RT performance will be inferior no matter what, and likely DLSS too, why would you put up with bad drivers and lesser features at the same or similar level and price?

Seems that Nvidia has had a pretty good idea of what RDNA 2 is going to do before AMD does. Obviously time will tell, but the prior history is also telling, and so far it feels pretty familiar.


I am hoping this one-way BIOS flash is not all board makers. It makes no sense; for instance, my MSI Tomahawk can flash directly from USB. They had also said you have to request the BIOS, which is also crap, as it should just be available for download. The key here is whether it will work right or not. I will wait and see what early adopters say, and may do the same as you, if I even upgrade this time. I am kind of leaning towards waiting for the next generation; after all, at this point it is likely only about a year away and will likely bring a new socket, DDR5, and maybe even PCIe 5. Most importantly, that board may actually be upgradeable again to a future CPU.


I am not particularly bothered to upgrade my RTX 2080 OC immediately, because it is pretty much all I need at the moment.
Blender performance is very interesting though.
I do not have an 8K monitor. How much do they cost? $1500 or more?
A low-cost 4K 60Hz monitor is enough for me at the moment.
There are not that many RTX games around yet.
The high power consumption (350/320 watts) of the RTX 3090 and 3080 is a concern to me.
I think that Nvidia will release "Super" versions if they have to, just like they did last time. 

RE: Now with pretty much everyone expecting that no matter what that the RT performance will be inferior and likely DLSS too why would you put up with bad drivers, lesser features at the same or similar, level and price. 

We still do not know what will happen. Maybe AMD realise they can't repeat past launch mistakes and maybe things will change/improve this time.


I need to do more PC builds and I am waiting for Zen3 launch to see what they offer.


Seems like nVidia may already be prepared to launch "super" or "AMP'd" editions, or whatever they will call them, at the same time, since they didn't reveal the 3060 or anything lower and we know Turing has been discontinued for some time now.

But yes, the software side is going to have to be completely overhauled, and I would dare say there's going to have to be a press event, either digital or physical, where Andrej Zdravkovic himself goes over the steps he has taken in that department to bring faith back to AMD.


Seems strange that the new UEFI can only support Zen3 on the 400 series. There was room on the ROM for Zen, Zen+, and Zen2 CPUs; now you can delete those and it only supports Zen3? Why not at least Zen2 and Zen3? Does that mean the socket won't support the 4000 series APUs, which are Zen2 and not Zen3?

I probably won't be making any moves until PCIe 5.0 and DDR5 at this point.

"Seems like nVidia may already be prepared to launch "super" or "AMP'd" editions"

That seems to indicate that NVidia isn't sure performance alone will differentiate it from RDNA2, especially with the higher VRAM.  Of course, if some of the Big Navi pricing rumors are to be believed, they'll all be bought by miners, so NVidia may have nothing to worry about.

A reputable leaker, who was correct about the RTX 3070-3090 specs before their official reveal, has leaked what could be a potential RTX 3060. The specs are in line with what you would expect, and the thinking is that it'll be announced officially on or after October 7th, when AMD officially announces RDNA2. That would mean the "entry level" gaming card from team green would have power in the area of the 2080 Super, if not the 2080 Ti, for around $400.

This is not very good news for AMD, since it severely limits their pricing game IF the full-fledged RDNA2 card will only compete against the 3070, and it is not good news for the buying public, since it means GPU prices will remain in the stratosphere...


Great video! Very educational for me. Much better than any I have seen so far at explaining the architecture. 

What I think is very interesting there is that it seems he is insinuating that the Vega we got really wasn't the Vega that was planned. It seems as if Navi may be what Vega was supposed to be. 

It would be interesting, then, if what Intel ends up with is more like Navi.

I see Tom's Hardware has the wrong image. NVIDIA said 2x 8-pin for the RTX 3090 and RTX 3080.

I guess nobody does any research or fact checking like I do.


No idea what you are talking about. If it is the Tom's Hardware link that black zion posted, they are talking about a potential 3060 variant, and the photo is of the 3070, which it clearly says below it.

I saw no mention of the power that a 3080 or 90 uses. 

Not sure that anyone knows if the 3070 even uses 2x8 adapter for it or not. I would guess that nobody knows what any other coming card uses either. 


All the officially revealed custom RTX 3080 and 3090 boards so far use two 8-pin connectors; the 3090 FE is the only one confirmed to use the single 12-pin.


And now it seems AMD isn't going to wait, and that something will be revealed tomorrow. Frank Azor, AMD's Chief Architect of Gaming Solutions & Marketing, tweeted the following:

What it will be is anyone's guess, but it's going to have to be big if they want to knock nVidia off the front page of every tech site. This could be RDNA2 or Zen 3 related, likely it's RDNA2 related...


RE: the Vega we got really wasn't the Vega that was planned

Nvidia are using the same image showing one 12-pin connector on the RTX 3070 Founders Edition.
GeForce RTX 3070 Graphics Card | NVIDIA

AIB cards require supplementary power connectors as follows:

RTX 3090: TDP = 350 Watts: two 8-pin.
RTX 3080: TDP = 320 Watts: two 8-pin.
RTX 3070: TDP = 220 Watts: one 8-pin.

PCIe power ratings:

PCIe slot = 75 Watts.
Supplementary PCIe 6-pin = 75 Watts.
Supplementary PCIe 8-pin = 150 Watts.

Is the 12-pin connector shown on the Nvidia web page for the RTX 3070 Founders Edition overkill?
Possibly, since it should only need the PCIe slot + a single 8-pin connector = 75 + 150 = 225 Watts.

Unless Nvidia have an additional surprise for AMD with the RTX 3070 FE...
It might be possible to use the dual 8-pin to single 12-pin adapter on the RTX 3070 FE to deliver more power to the GPU and overclock it for more performance.
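The connector budget arithmetic above can be sanity-checked with a small sketch (the wattage figures are the standard PCIe budget numbers quoted in the post; the TDPs are the ones listed above):

```python
# Power budget available to a card: PCIe slot plus supplementary connectors.
PCIE_SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_budget(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Total rated power: slot plus any supplementary connectors fitted."""
    return PCIE_SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

# RTX 3070 (220 W TDP): slot + one 8-pin = 225 W budget, so it just fits.
assert board_power_budget(eight_pins=1) == 225
# RTX 3080 (320 W TDP) and 3090 (350 W): slot + two 8-pin = 375 W budget.
assert board_power_budget(eight_pins=2) == 375
```

Which is why a 12-pin fed by a dual 8-pin adapter on the 3070 FE would leave roughly 150 W of headroom over its rated TDP.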

I believe all three announced FE cards use the 12-pin. The only ones I have seen with two 8-pins are the cards shown by AIB partners. Still no idea what HC was talking about.

Glad to hear it. They need some serious damage control. Hope it is good news and not just confirmation of the speculation. 


I saw 12 pin connectors on the pics of all 3 FE cards. 


Another thing Nvidia may have done is include some spare CUs on the RTX 3070 die which are currently disabled.
That way, if AMD do release a Big Navi that outperforms it, they might be able to unlock the spare cores instead of releasing a "Super" version.