
Yuanta Securities Investment Consulting Company - nVidia Ampere to be 50% faster, 50% more efficient than Turing

Should know more at SIGGRAPH in August, where nVidia has announced a new architecture every two years for several generations running. That's also around when "Big Navi" is expected to hit the shelves, though, so AMD may yet again be pitting old hardware against newer, faster, more efficient hardware.

https://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557


As usual, take every report and rumour with a mountain of salt until reputable third-party reviewers have it in their systems. But given nVidia's current lead in performance per watt on so-called 12nm cards, and the massive room for improvement in ray tracing, it's really not that hard to imagine either of those figures being true.


AMD still has some uppity cards to get to market.

I would like RDNA 64 CU and 16GB VRAM in an affordable offering.


Well you're not going to have that, because they would be more expensive than a 2080 Ti.


black_zion wrote:

Well you're not going to have that, because they would be more expensive than a 2080 Ti.

That depends on the yields more than anything.

Vega 64 was expen$ive.


This will seem a little weird... but I'm going to begin this with a price breakdown of the RX 480 Reference. 

As is known, at launch (August 2016) the Polaris 10-based RX 480 8GB Reference was $220 MSRP.

Generally speaking, AMD adds 15% to the estimated production cost; primarily, this is to encourage AIB partners to keep as close to MSRP as possible.

When we take this off, we're left with $187 as the (est.) production cost.

At release, GDDR5 was trading at $6.72 per 8Gb module (1GB); it did rise to $8.38 by August 2017 before falling again to just over $7/GB. Still, we're going to use the launch cost, so $54 for 8GB models.

As a side note... GDDR6 costs roughly 80% more per GB, i.e. $12.10 per 8Gb (1GB), or ~$97 for 8GB.

Now, as far as the PCB and PCB components are concerned, honestly this is the cheapest part.

Generally speaking, we're talking about $18 for all of the components + $5 for the PCI-Express compliance (i.e. license) fee. So we're looking at roughly $23... and this is going to be on the upper end of such costs, using high-cost components.

The cost of actual production is a little more "intangible": while the board itself is line-produced, final assembly is still done by hand, at a rate of ~5 cards / hour / person. Most assembly is based in Taiwan / China, where the hourly rate is only about $2.80 (and those regions are typically better than average).

(Those curious as to "why is everything made in China?!"... well, I think you perhaps better understand now.)

Still, even if we average that hourly rate over the cards produced per hour... we're still not taking into account the total cost of the building, utilities, etc. See what I mean about intangibles?

With this said, I think we can safely over-estimate the production cost as similar to the hourly rate here, about $3/unit; still, let's round up and say the PCB and production costs are (est.) $30.

We're probably looking at half again for the cooling solution... something like $15, given it's primarily materials and machining, which isn't overly expensive.

So let's say $45 for all of that, and that's not really going to change much between the RX 480 and RX 5700.

As such, we're now down to $88... AMD again uses a 15% mark-up over production cost for the wholesale price, meaning the Polaris 10 GPU was likely $75 to produce.
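The whole chain of estimates above can be sketched in a few lines of arithmetic. To be clear, this is purely the estimation model described in this post (15% taken off MSRP, launch GDDR5 pricing, ~$45 for board and cooler), not actual BOM data:

```python
# Back out an estimated GPU production cost from MSRP,
# using this post's assumptions (not real BOM figures).

MSRP = 220            # RX 480 8GB reference, August 2016
MARKUP = 0.15         # assumed AMD / AIB margin

est_production = MSRP * (1 - MARKUP)       # take the 15% off: ~$187
memory = 8 * 6.72                          # 8x 8Gb GDDR5 at launch pricing, ~$54
board_and_cooler = 30 + 15                 # PCB + assembly ~$30, cooler ~$15

gpu_wholesale = est_production - memory - board_and_cooler   # ~$88
gpu_production = gpu_wholesale * (1 - MARKUP)                # ~$75

print(round(est_production))   # 187
print(round(gpu_wholesale))    # 88
print(round(gpu_production))   # 75
```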

This actually doesn't sound too unreasonable, given that the Ryzen 1600 (14nm / 213mm²), versus Polaris 10 (14nm / 232mm²), costs $68 to produce. As we're talking about ~9% less silicon with a similar (and very high, 90%+) yield rate, a 9-10% higher cost for Polaris does make sense.

Now, as a note, Navi 10 uses 8.2% more silicon; but the cost of 14/12nm vs. 7nm... well, that's a bit more tricky. From a silicon perspective, the price would rise from $75 (production) to $82 (production); but does 7nm itself cost more?

Eh, well, actually based upon the Radeon VII production costs... I'd argue that 7nm appears to be a little cheaper, as costs seem to be driven mostly by silicon used per die and by yield... and the thing is, 7nm yields "out of the box", so to speak, have been on par with current 14nm+ / 12nm yields, rather than 14nm launch yields.

This means the wholesale GPU price for Navi 10 is likely ~$95... which actually isn't far from what Navi was rumoured to cost pre-launch (I believe it was rumoured to be a $96-98 GPU, so we're within a few dollars with our estimate).
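Carrying the estimate forward: scaling the ~$75 Polaris production figure by Navi 10's 8.2% larger die and re-applying the assumed 15% mark-up lands within a couple of dollars of the ~$95 wholesale figure (the small gap is down to where you round):

```python
# Scale the estimated Polaris 10 production cost by die area,
# then apply the assumed 15% mark-up to get a wholesale price.
polaris_production = 75
navi10_production = polaris_production * 1.082   # 8.2% more silicon -> ~$81
navi10_wholesale = navi10_production * 1.15      # ~$93, vs. ~$95 in the post
print(round(navi10_production), round(navi10_wholesale))
```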

As such, this gives us a great basis for what AMD could have set as the MSRP for the RX 5700 XT. 

$275 MSRP... which includes all of the normal mark-ups. 

Sure... it's a bit more expensive than the RX 480, but that's still $105 lower than the release $380 MSRP... and $175 lower than the original $449 MSRP.
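Rebuilding that hypothetical RX 5700 XT MSRP from the same components (the ~$95 wholesale GPU, 8GB of GDDR6 at the assumed $12.10/GB, and $45 for board and cooler) gives roughly the $275 figure:

```python
# Forward calculation: wholesale GPU + memory + board/cooler, plus 15% mark-up.
gpu_wholesale = 95
memory = 8 * 12.10          # 8GB GDDR6 at the assumed $12.10/GB
board_and_cooler = 45
msrp = (gpu_wholesale + memory + board_and_cooler) * 1.15
print(round(msrp))          # ~272, i.e. about the $275 quoted
```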

Alright... so why do this breakdown? What does it accomplish, apart from making RX 5700 owners (like myself) a little depressed about how much we're being ripped off by AMD?

Well, keep in mind that Navi 20 / 21 currently looks likely to launch in Q3 2020... so what sort of price can we expect such hardware to carry, given it's to be a 505mm² GPU?

For the most part, the rest of the components (unless they're adding more memory, maybe a 12 or 16GB Model?) will remain fairly similarly priced.

As such, how much would an (80 CU) 505mm² GPU cost? Well, my estimate would be $188 wholesale ($163 production).

We can now determine the MSRP for different Memory Models...

Navi 21 • 8GB • $380 (Production MSRP)

Navi 21 • 12GB • $435 (Production MSRP)

Navi 21 • 16GB • $490 (Production MSRP)

Navi 21 • 8GB HBM2 • $470 (Production MSRP)

Navi 21 • 16GB HBM2 • $670 (Production MSRP)
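The GDDR6 rows of this list follow directly from the earlier assumptions ($188 wholesale die, $12.10/GB GDDR6, $45 board and cooler, 15% mark-up), rounding to the nearest $5; the HBM2 rows additionally depend on HBM2 and interposer costs that aren't itemised here:

```python
MARKUP = 1.15
GDDR6_PER_GB = 12.10     # assumed $/GB from earlier in the post
BOARD_AND_COOLER = 45
NAVI21_WHOLESALE = 188   # the post's estimate for the 505mm² die

def est_msrp(gb):
    """Estimated GDDR6-model MSRP, rounded to the nearest $5."""
    cost = NAVI21_WHOLESALE + gb * GDDR6_PER_GB + BOARD_AND_COOLER
    return round(cost * MARKUP / 5) * 5

print(est_msrp(8), est_msrp(12), est_msrp(16))   # 380 435 490
```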

Now where AMD actually price this is a different matter. 

I do get the feeling we're going to see a Navi 21 • 16GB at $650-700 MSRP, but perhaps they might even price it against the RTX 2080 Ti instead of the RTX 2080 S... so it could even be $900-1200.

• 

As for Ampere... well, it's actually amusing, given Ampere is the codename for the server variant of Turing.

I have no doubt it'll be impressive in the server / machine intelligence space; but it's not the next Desktop Architecture.

That's going to be Baer or Dürer; I forget which of those was the name for it.

While it might be on 7nm... NVIDIA are going to be using Samsung instead of TSMC for it, and something to keep in mind is that Samsung only really has 7nm LPE, so what you can expect to see is likely a 10/7nm hybrid (similar to NVIDIA's 16/14nm for Pascal).

I'm sure it'll be an improvement from a power-consumption perspective... but likely only 30%. As another note, Samsung DOES NOT have the rights (or production ability) to support the NVIDIA-TSMC large die; that means they'll be limited to 640mm² instead of 800mm².

So sure, improved density... improved power efficiency... but that will very likely result in a very small real-terms improvement at the high end; they might see a 10-15% improvement over Turing (which is about what Turing had over Pascal in non-RTX scenarios).

And what we might see them doing is keeping the same traditional-pipeline performance while focusing primarily on expanding the RT core counts for better ray-tracing performance... a 25-35% performance improvement in RTX games would actually make their x60 viable (at present, RTX is a novelty on those cards).

Now, here's the thing... the rumour about Navi 21 is that it'll be 2x the performance, but on 7nm+ (EUV), which provides ~20% better density and 10% better power efficiency than 7nm.

To put that into perspective... on 7nm EUV, Navi 10 could be shrunk to ~215mm² (and before anyone says "wait, wouldn't it be 201mm²?", you have to keep in mind elements of Navi 10 are effectively still at 14nm+ densities).

This means if Navi 21 were just "doubling", it'd be a 430mm² die... but that leaves 17.5% of the die space unaccounted for.
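A quick sanity check of the die-size numbers in the last few paragraphs (251mm² for Navi 10 follows from the 232mm² Polaris die plus 8.2%; the 215mm² shrink is this post's own estimate, since analog/I/O barely scale with the node):

```python
navi10_7nm = 251                   # mm²; 232mm² Polaris + 8.2%
naive_shrink = navi10_7nm * 0.80   # ~201mm² if everything scaled with density
effective_shrink = 215             # mm², assuming non-scaling analog/I/O blocks
doubled = 2 * effective_shrink     # 430mm² for a straight CU doubling
rumoured = 505                     # mm², the rumoured Navi 21 die size
spare = rumoured - doubled         # 75mm² left over for something else
print(round(naive_shrink), spare, round(spare / doubled * 100, 1))   # 201 75 17.4
```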

Rumours are that this is where the AMD hardware ray-trace engine will go, but we'll see.

Based on the filed patents, AMD is likely to be adding a VLIW/4 (fixed-function) pipeline; sure, that could potentially be used for ray-trace acceleration... but knowing AMD, it might be a bit more versatile than that.

It could just as easily be used for RT acceleration as for traditional-pipeline acceleration or machine learning (tensor work). I wouldn't put it past AMD to have a more "all-in-one" solution to counter NVIDIA; that is, after all, what they did with GCN.

And the thing is... AMD could still put out something even stronger, as they're no longer limited by the architecture in that regard like they were with GCN (where larger cards basically HAD to be multi-GPU).


There's also the arbitrary price adjustment at the end, where a company decides if they want to balance the features, performance, and power consumption of their product against a competitor.

They could take a lower margin, or even a complete loss (as Intel was known to do with mobile CPUs for a number of years), to increase market share with the intent of reaping a higher margin down the road. This is what AMD did with Ryzen processors; it resulted in a severalfold increase in market share and allowed them to raise prices relative to the Ryzen 1000 series. It's also what AMD did for many past GPU generations: competing with a technically inferior product than nVidia's but pricing it lower (mining explosion notwithstanding), so that even custom editions were still cheaper than their nVidia counterparts.

The second option is to not adjust the margin at all, which is what we're seeing with the RX 5000 series, and pretty much what we saw with the embarrassment that was Vega: technically inferior products competing against technically superior ones, or even out of their class, as with the RX 5500 XT, which price-wise lands against the much faster RTX series instead of the GTX series. In a market where AMD barely controls double-digit share and the other competitor has no reason to lower prices, this is a loss for AMD, their board partners (especially exclusive partners Sapphire and XFX), and the consumer.

•

I think with nVidia's next generation, they will probably use Samsung for the low to mid-mid range and TSMC for the upper-mid to high end, mostly because TSMC's processes will be more advanced, which is crucial where heat and power consumption are major factors, whereas at the other end cost matters most. I could see Samsung handling their mobile chips as well.

Honestly, I don't see Navi 21 being twice the performance of the 5700 XT, because the last time AMD doubled the power of their cards, going from the TeraScale 1 HD 4000 series to the TeraScale 2 HD 5000 series, there was not a linear increase in performance; it was about 40-70% depending on the game. Granted, Navi 21 is supposed to be an evolutionary redesign instead of a generational redesign, but I still think they're going to have a tough time averaging a 50% performance improvement, chiefly because there is no major node reduction to greatly cut power consumption and heat production. Now, this would still put them in RTX 2080 Ti territory, BUT if nVidia is able to improve their successor cards by the same amount, they're going to be extremely outclassed yet again.

I don't see AMD using HBM2 over GDDR6 because of HBM2's much higher cost; they will be competing against a two-year-old architecture by the time it releases, and they don't want a repeat of the Vega packaging fiasco.

Combine all this with the fact that AMD's drivers haven't been up to snuff with nVidia's for years; that nVidia will still have ReShade, a feature highly touted by gamers since it makes classic and even new games look much more detailed; that high-end cards are a very high-margin, low-share market segment; and that new AMD mid-range cards featuring ray tracing won't see the light of day until 2021. Outside the ultra-high-end market, it's going to be a tough sell for most people to choose AMD over nVidia, even people like me who loathe the thought of using an nVidia card.


black_zion wrote:

There's also the arbitrary price adjustment at the end, where a company decides if they want to balance the features, performance, and power consumption of their product against a competitor.

Sure; what I was doing was pointing out just how LARGE that "adjustment" is for AMD with Navi.

Historically, AMD have always run the 15% rule: their hardware MSRP is 15% above production cost, which means AMD and the AIB partner both receive a 15% margin when selling at MSRP.

Now, as I understand it with Navi, the AIB partner margin is still the same 15% mark-up; instead, AMD have set the price of the die itself considerably higher to make up the difference. This has left their board partners quite baffled and angry... especially as Sapphire and ASUS were right about the initial MSRPs proposed for both the RX 5700 series and the RX 5500 series.

Where AMD initially wanted to sell them, the RX 5700 would've been a "hard sell" (but possible), while the RX 5500 simply wouldn't sell.

Even where it is, I have to argue strongly that they're not going to be selling many units.

MAYBE some of this is TSMC raising production prices, due to high demand (thanks to Apple) and limited production capacity (given 7nm is still in early adoption)… but a situation where it's 2-3x more expensive than it should be?

And those prices just so happen to align with AMD's previous offerings?

Yeah, I'm sorry but I just don't bloody buy that.

But what's stranger about all of this... is that by doing this, AMD are squandering hardware that could have repeated their success with Ryzen.

I mean seriously... 

Imagine for a second that the MSRP were:

RX 5500 XT / 4GB • $150 

RX 5600 XT / 6GB • $220

RX 5700 XT / 8GB • $280

(knock $20-25 off for the non-XT variants, which are deliberately downclocked)

They'd still be competing with the RTX 2060, RTX 2070 and RTX 2080... that wouldn't change, but in terms of prices NVIDIA would be looking like Intel: offering the arguably "faster" GPU in certain scenarios, but at 1.5-2.0x the price.

There's nothing strictly wrong with AMD pricing less aggressively when the hardware is frankly so competitive this generation, but the problem is that NVIDIA still commands the lion's share of the market. What AMD should be focused on is creating substantial growth of their own within the graphics market; THEN, once they have say 30-40% and are seen as equals, with their own "core" AMD-only audience... THEN they can start pricing higher and being worse value.

• 

Honestly, I don't see Navi 21 being twice the performance of the 5700 XT, because the last time AMD doubled the power of their cards, going from the TeraScale 1 HD 4000 series to the TeraScale 2 HD 5000 series, there was not a linear increase in performance; it was about 40-70% depending on the game. Granted, Navi 21 is supposed to be an evolutionary redesign instead of a generational redesign, but I still think they're going to have a tough time averaging a 50% performance improvement, chiefly because there is no major node reduction to greatly cut power consumption and heat production.

Navi 20/21 isn't a new architecture... hence why I'm saying it'll be called the RX 5900 XT; all they're doing is creating a "big" Navi 10 (by literally doubling the compute unit count).

The benefit of using 7nm EUV (7nm+) over 7nm is that they get some spare silicon space, either for more compute units or for something else, plus an improvement in power efficiency. This likely means that instead of being a 375W card, it'll be ~325W (assuming 16GB GDDR6).

Would this be higher than the RTX 2080 Ti? Sure, but also keep in mind an 80-RCU Navi will be a good 10-15% faster than the RTX 2080 Ti in traditional-pipeline games.

As noted, they'll have spare space on the silicon that'll be filled with something; the rumour is that it'll be hardware ray tracing (or at least something that can support DXR, even if it isn't specifically designed for it).

But how "good" that will be is a matter for debate... sure, both the PlayStation 5 and Xbox Series X have HWRT support, but that's reportedly being provided by Imagination Technologies, not AMD; honestly, their ray-tracing hardware is quite impressive, so maybe that's something AMD are also licensing instead of developing their own.

Still, in any case... I'm standing by my original comments from a year ago: real-time ray tracing IS NOT the direction that real-time graphics is headed in.

Sure, for visual effects (i.e. Blender, Maya, etc.) it is; but when it comes to gaming there are more impressive and proven technologies, including what AMD themselves first introduced back in 2005 with their node-based global-illumination approach, prior to abandoning it in favour of Forward+.

That approach frankly better suits modern graphics architectures, with better scalability, and it has a proven track record as a core component of the Snowdrop Engine for the past six years. Then there's the variant used by Crytek, which utilises temporal frames and the same technique to produce real-time world-space reflections... without needing dedicated acceleration hardware.

What's going to matter over the next few years is IF NVIDIA actually push hardware-accelerated ray tracing as a standard.

Otherwise, their focus and shift towards it is going to be detrimental to their continued success, as AMD at this point has an architecture that can steadily provide meaningful performance improvements for the foreseeable future just by scaling up. NVIDIA were already hitting their limits in that regard; it's a major reason they're "changing the game", as it were.

AMD, as a result, have the ability to pull off another Ryzen... to take command of the market; maybe not dominate it, but certainly command it.

Yet their actions seem to show that they WANT to remain "the alternative NVIDIA", which is ridiculous given AMD/ATI are the ones who created damn near every major graphics innovation of the past generation; NVIDIA were always just better at marketing and at acquiring what they needed to keep an edge.


Imagine for a second that the MSRP were:

RX 5500 XT / 4GB • $150 

RX 5600 XT / 6GB • $220

RX 5700 XT / 8GB • $280

That's been my, and many others', biggest complaint since the 5700 XT's launch. The 5700 XT effectively took the place of the 580/590, yet carried a massive price premium over its predecessors for no other reason than that AMD wanted to price-match nVidia. That continued with the 5500 XT and the upcoming 5600/XT, and will no doubt happen with Big Navi too. Not only is that a big "F YOU!" to consumers, it also ensures AMD's market share remains abysmally low, which isn't helped by their driver situation, their lack of feature support, or their lack of response. This sets AMD up for a horrid situation, given Intel is going to pour massive resources into their GPU division as a major part of their plan to expand into other markets, and AMD faces the very real possibility of being pushed out of every OEM machine once Intel makes its own cards.

https://www.thehindubusinessline.com/info-tech/intels-india-team-pushes-ahead-with-gpu-roll-out-plan/article30469419.ece

Ray tracing is the future for sure; it's not a fad like 3D, especially since Adobe and nVidia are pushing it into Adobe Creative Cloud and After Effects, among other places, and the benefits extend from Hollywood blockbusters all the way down to small businesses, and from pretty much every field of science to the military. The problem for AMD is not so much that they're late to the game, but that they're betting big on their hybrid ray-tracing solution working better than nVidia's dedicated cores. While I think this may work out better for consumers, it rules out a professional-grade dedicated ray-tracing accelerator card, which may come back to bite them.


AMD and NVIDIA should both support DXR as the API of choice. Given shaders are designed around DX12 now, the API can support anything from a simple ray cast through to full ray tracing and whatever else is desired. Just remember that if the game is too demanding, sales will suffer.

My roll-your-own ray casting works fine, and I'm able to scale it to insane resolutions with a proportional increase in the GFLOPS needed. It's possible to do this in under 2,000 lines, though the sprite logic is a tad more work.
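The "proportional increase in GFLOPS" is just the geometry of ray casting: with a fixed number of rays per pixel, work grows linearly with pixel count. A hypothetical back-of-envelope sketch (the per-ray cost constant here is an illustrative assumption, not a measured figure):

```python
def gflops_needed(width, height, fps=60, rays_per_pixel=1, flops_per_ray=2000):
    """Rough per-second FLOP budget for a simple ray caster.
    flops_per_ray is a made-up illustrative constant."""
    rays_per_second = width * height * rays_per_pixel * fps
    return rays_per_second * flops_per_ray / 1e9

base = gflops_needed(1920, 1080)
quad = gflops_needed(3840, 2160)
print(round(quad / base, 3))   # 4.0 -- 4x the pixels, 4x the work
```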
