The market, it seems, isn’t terribly impressed with Nvidia’s recent RTX family, either. Shares of the company’s stock have slipped in recent days, particularly Friday, after Morgan Stanley said the performance of the latest RTX video cards hadn’t met its expectations.
“As review embargos broke for the new gaming products, performance improvements in older games is not the leap we had initially hoped for,” Morgan Stanley analyst Joseph Moore said in a note to clients on Thursday, according to CNBC. “Performance boost on older games that do not incorporate advanced features is somewhat below our initial expectations, and review recommendations are mixed given higher price points.
“We are surprised that the RTX 2080 is only slightly better than the 1080 Ti, which has been available for over a year and is slightly less expensive,” he said. “With higher clock speeds, higher core count, and 40 percent higher memory bandwidth, we had expected a bigger boost.”
As with Micron on Friday, the fundamentals of Nvidia’s gaming business are expected to stay healthy, and the $273 price target that Moore set on the stock reinforces that interpretation. There’s no serious risk of a downturn in Nvidia’s GPU business here, because Nvidia currently has a lock on the high end of the market. The problem is that the company hasn’t articulated a good reason for anyone who isn’t in the market for a $1,200 GPU to upgrade.
Focusing so heavily on the RTX 2080 Ti may have been a double-edged sword. On the one hand, this massive GPU can crank out high frame rates and drive 4K at or above 60fps. On the other, it’s $500 more than the flagship it replaces. A 1.71x price increase from the GTX 1080 Ti to the RTX 2080 Ti for a ~1.3x performance increase is a lousy deal, but it’s virtually the only deal in the stack when the 1080 Ti is almost as fast as the RTX 2080 with more RAM and a lower price point. The new cards are also less power efficient, with substantially higher power consumption even when not using their additional resources to do useful work in games.
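To make the value argument concrete, here is a back-of-the-envelope sketch using the ~1.3x performance figure above and the launch prices of $699 (GTX 1080 Ti) and $1,199 (RTX 2080 Ti Founders Edition); the MSRPs are commonly cited figures I'm assuming, not numbers pulled from this article:

```python
# Rough value comparison, assuming launch MSRPs of $699 (GTX 1080 Ti) and
# $1,199 (RTX 2080 Ti Founders Edition) and a ~1.3x average frame-rate uplift.
price_1080ti, price_2080ti = 699.0, 1199.0
perf_uplift = 1.3  # relative performance, GTX 1080 Ti = 1.0

price_ratio = price_2080ti / price_1080ti   # ~1.72x the money
value_ratio = perf_uplift / price_ratio     # performance per dollar, relative to the 1080 Ti

print(f"Price increase: {price_ratio:.2f}x")
print(f"Relative perf per dollar: {value_ratio:.2f}x (~{(1 - value_ratio) * 100:.0f}% worse value)")
```

Under those assumptions the 2080 Ti delivers roughly a quarter less performance per dollar than the card it replaces, which is the crux of the complaint.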
The really interesting question in all this is whether the Turing family should even be considered a real GPU “family” at all. With 7nm ramping at TSMC, it’s fair to ask if Nvidia really plans to ramp an entire new set of cores on 12nm, including a new RTX 2060 and 2050 (parts we haven’t even heard of yet), or if the goal is to launch just new high-end chips for the 2070, 2080, and 2080 Ti, before following up with a new 7nm family in 2019 or 2020. Either move would make Pascal the longest-lived high-end GPU family on record, but with AMD lagging and Intel not expected in-market until 2020, Nvidia has time to plan its moves carefully. The company’s consumer upside on Turing may be limited, but that likely won’t matter with strong Tesla adoption and continued growth in the HPC market.
Nvidia Shares Skid on Disappointing RTX GPU Launch - ExtremeTech
I took a glance at the Nvidia forums and it seems like the latest driver for the RTX cards (4xx.xx) is crashing a lot of computers.
You can do that with every driver. The funny thing is that the golden drivers, the ones that are considered the best and stable, etc..., had just as many people complaining in the forums when the driver was released.
2080 ti is delayed, and it's looking like another gpu paper release. The nvidia forums are screaming about that.
The only increase in performance is the 2080 TI (25%-35% increase over 1080ti). As noted in article, 2080 == 1080 TI but for more money. The new RTX feature is not very usable because of frame rates. Plus, MS needs to release the API for it to be utilized. So for $1200-$1500 you can get a bit faster performance over 1080 TI. Looking at the benchmarks between the two, 2080 ti is definitely better, but I didn't think it was game changing or worth the cost.
I am hoping the market just laughs at Nvidia. Also, that AMD can soon release a card that can compete with, and at least slightly exceed, the 1080 Ti.
Here in Canada, the 2080 is about the same price as the 1080ti (around $1000). So, if you were dithering about buying a 1080ti, you might as well go for a 2080 and essentially get the ray tracing thrown in for free. If ray tracing turns out to be a flop then you've not really lost anything except power consumption and a few worse benchmark results. It's a shame that Navi isn't ready to launch NOW while everyone's pissed at NVIDIA. A $300-400 card with 1080 performance would fly off the shelves.
I think NVidia was pretty clever about shifting the naming convention. If you simply change the name of RTX 2080 to the RTX 2080 Ti and also change the name of the RTX 2080 Ti to the RTX Titan, the names and costs match up with previous generations. By shifting the name, a user can say, well the RTX 2080 is 30% faster than the GTX 1080! It sure is, but it is virtually identical to the GTX 1080 Ti which it matches cost wise.
At the end of the day, this really amounts to a technology launch, where the new GPUs are more or less identical in performance per dollar (or slightly worse) than the 1000 series in legacy games, and worse in power consumption. But for the extra money, and electric bill, you now get ray tracing capability through NVidia's special algorithm. Whether or not that will be worth it is totally up in the air. NVidia then added a whole new tier of elite GPU with a price point to match, to make this look more like a full product launch vs. just a ray tracing roll out.
They're actually up at this time, but it's bloody scary to see investment organizations and banks talk about a GPU launch. AMD hasn't even hinted that their position of "No 7nm Vega for gamers" is changing, and there's no reason to expect it to given the memory shortage (which will continue in 2019, thank you Samsung, go to Hell), meaning AMD will funnel all available chips into the high margin professional sector (especially if the rumors of 7nm Vega delivering stratospheric performance, 25TFLOPS of compute, are true). And even if the rumor mill is right and a new "RX 590" chip is launched, it's not going to be anywhere near a position to challenge nVidia: it will almost certainly slot in between the RX 580 and Vega 56, likely about 25% faster than the former and 25% slower than the latter, while costing nearly the same as the 1070 Ti, which will be much faster.
AMD hasn't said anything of the kind.
Instead I think what Lisa Su stated has been misinterpreted, because what was said was that 7nm Vega would launch first as Radeon Instinct (AMD's Server Cards) in Q3-Q4 2018... she also stated (clearly) that AMD had not forgotten about gamers / gaming cards, and that there would be more news (in the near future) about 7nm Radeon Gaming Cards.
Now one interpretation of this (that the Tech Media has simply run with) is that there will be no Vega 7nm Gaming Cards.
This simply isn't what was said, and from a practical standpoint it makes sense too.
7nm will, for the moment, only be able to provide a small supply, and yields likely still won't be ideal within the first production run. In fact, I'd wager AMD wishes to avoid a repeat of Ryzen 1st Gen and Polaris 1st Gen... it didn't take long (6 months) for them to resolve the initial production teething problems.
Ryzen at that point was already committed, meaning the "improvements" could only be applied to 2nd Gen, while Polaris saw a complete refresh from RX 400 to RX 500 in < 6 months.
I'm not sure I agree with how AMD handled that, but what's done is done.
Why go this route of "Server Consumers, first?" … well keep in mind the price ranges that each Market will be willing to pay.
Retail Consumer • $600 - 900
Professional Consumer • $1200 - 1800
Server Consumer • $2000 - 5000
Remember that for the Server Consumer, price actually isn't the major factor... rather it's Total Cost Over Time vs. Performance.
As such, because they're willing to pay a higher premium, AMD can afford to bin chips more aggressively rather than needing to keep most of them... I also stated on various gaming channels that Vega 7nm is-and-isn't Monolithic.
What I mean by this is that for Radeon Instinct it WILL be a Monolithic Chip, as that's quicker to produce and bin... but conversely it's more expensive for ramped production, such as the numbers required for the Professional and Retail Markets. This is why I'd wager heavily that RX Vega / Navi will be Chiplet-based (similar to Ryzen), as this means they can compartmentalise production to get better numbers... cheaper production lines... a better and less wasteful binning process... plus, most importantly of all, Scalability across the range of production lines.
Keep in mind that Vega 7nm for Radeon Instinct is a SINGLE Product, not an entire Production Series. The RX "600" Series (assuming a Complete Range) will be 9 Distinct Products. In fact I'd go further than this and make some very (confidently) precise predictions (the raw throughput each implies is sketched just after the list):
Entry Level: (Mid-Late June 2019 $75 - 150 / 212mm² / 2-4GB / < 110w)
RX 630 (RX Vega 16) • 1350MHz / 11CU
RX 640 (RX Vega 24) • 1400MHz / 16CU
RX 650 (RX Vega 32) • 1350MHz / 22CU
Mainstream: (Mid-Late April 2019 $180 - 350 / 320mm² / 4-8GB / < 225w)
RX 660 (RX Vega 44) • 1460MHz / 32CU
RX 670 (RX Vega 56) • 1380MHz / 40CU
RX 680 (RX Vega 64) • 1350MHz / 44CU
Premium: (Mid-Late August 2019 / $400 - 800 / 450mm² / 8-16GB / < 350w)
RFX 690 (RX Navi 80) • 1400MHz / 40CU
RFX 695 (RX Navi 112) • 1400MHz / 56CU
RFX 699 (RX Navi 128) • 1400MHz / 64CU
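For reference, here's a minimal sketch of the raw FP32 throughput those CU counts and stock clocks would imply, assuming the standard GCN layout of 64 stream processors per CU at 2 FLOPs per clock; the SKU names, CU counts and clocks are, of course, my own speculation from the list above:

```python
# Theoretical FP32 throughput for a GCN-style GPU:
#   TFLOPS = CUs * 64 shaders/CU * 2 FLOPs/clock * clock (GHz) / 1000
def gcn_tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * (clock_mhz / 1000.0) / 1000.0

# A few of the speculative SKUs from the list above: (CU count, stock clock in MHz).
predicted = {
    "RX 650  (22 CU @ 1350 MHz)": (22, 1350),
    "RX 680  (44 CU @ 1350 MHz)": (44, 1350),
    "RFX 699 (64 CU @ 1400 MHz)": (64, 1400),
}
for name, (cus, mhz) in predicted.items():
    print(f"{name}: ~{gcn_tflops(cus, mhz):.1f} TFLOPS FP32")
```

That works out to roughly 3.8, 7.6 and 11.5 TFLOPS respectively at the stock clocks listed, before any of the overclocking headroom discussed further down.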
In terms of the Memory, I'd actually wager a Hybrid Interface High-Bandwidth Cache...
Entry Level: 16MB High-Bandwidth Cache / 1GB (4x256MB) HBM2 / 1-3GB GDDR6
Mainstream: 48MB High-Bandwidth Cache / 2GB (4x512MB) HBM2 / 2-6GB GDDR6
Premium: 96MB High-Bandwidth Cache / 4GB (4x1GB) HBM2 / 4-12GB GDDR6 and|or NVM SSD.
Of course they could just go with pure GDDR6 … but with this said, the "Hybrid" approach would actually allow them to use GDDR5 instead, which could potentially be cheaper and outsourced to Global Foundries (remember how they've Re-Tooled for SoC and Memory Production?).
This might, however, be limited to RX Navi itself, although I think AMD will see an opportunity to lower costs (especially with groups like Samsung, Micron and SK Hynix currently content to keep Memory at a premium) … so it makes sense for them to seize said opportunity. It might, after all, be the only way they can get the RX 680 under $350, which it MUST be.
I'd also wager that they'll all be capable of 1550 - 1750MHz "Effective" Frequencies, which might raise the question of why I'm suggesting their "Stock" Clocks will be substantially lower.
Simple: AMD will still want to keep good Performance / Watt, but more than this, NVIDIA typically gets a lot of praise for the scores their overclocked cards can achieve over stock... so increasing the potential overclock from 6-10% to 12-20%, even if it becomes disgustingly inefficient, would allow AMD to market "greatly improved efficiency" (which they like to do) while also getting some love from the Overclocking / Enthusiast Community in regards to how far their hardware can be pushed (with similar gains as you'd see from the NVIDIA GTX 10-Series / RTX 20-Series).
Of course I could be completely wrong, we'll see in a few months.
•
As for the GeForce RTX 20-Series being "Disappointing"... honestly, like RX Vega, in terms of Generation-on-Generation performance improvement (and you won't hear me say this often), NVIDIA is right about where most generational GPU performance gains land.
See, the issue isn't that the RTX 20-Series is actually disappointing, but rather that it's being compared to the GTX 10-Series... which was an unusually large performance leap (+13.5% above the trend).
What's more, NVIDIA then proceeded to release a "Full" GP102, which is notable because the RX Vega 64 (64CU) isn't as large as Vega could've been.
AMD was perfectly capable of producing an RX Vega 80 (80CU) … likely at 1400-1500MHz, which could've comfortably competed with the GTX 1080 Ti / Titan Xp, landing somewhere between the two; but AMD didn't produce such a GPU. Why? Well, because it would've been stupid to.
Actually look at the GTX 10-Series carefully... what do you see? What I see is two generations (GTX 10-Series and GTX 11-Series) essentially merged into a single, frankly messy, product line.
Now why did they merge the two generations? Well, if I had to guess based upon their actions over 2017 and 2018... I think they're desperately trying to establish that AMD is simply "Uncompetitive", driving their market dominance to a point where the transition to Low-Level APIs would result in better performance for their RTX Architecture while retaining a lead over GCN (which is better suited to Low-Level APIs), something they could do with a Node Shrink (to, say, 10nm).
This would've allowed them to offset the performance difference while being able to keep similarly sized GPU Dies.
Now, as I noted, I think the RTX 20-Series is a perfectly reasonable performance uplift... and the RTX features have potential, but like GCN they rely too much on external adoption of new approaches. GameWorks will of course help ease this, but you still can't force new Features on Developers, and it doesn't matter how deep NVIDIA believes their pockets are; they can't keep paying AAA Studios (who'll have to create a separate AMD pipeline anyway for Consoles) to basically favour their Hardware on PC.
NVIDIA holding back Low-Level APIs might've benefited them short-term (GTX 9-Series and 10-Series), but at the same time, had they NOT done this, AMD would've gained a more dominant position and become more complacent, which would've better allowed NVIDIA to then introduce new Technology.
And this is something that happened in the past, with the GeForce 8-Series and the introduction of CUDA... what's more, it was introduced with an immediate use in, and improvement to, games via PhysX.
They're doing the exact same marketing this time around, but as noted... back with the GeForce 5 / 6 / 7-Series, NVIDIA were in some ways the Underdog.
That becomes a very powerful marketing tool, especially to the General Consumers... who then buy based upon the potential not the actual performance.
In fact, performance-wise the HD 2000 Series vs. the GeForce 8-Series wasn't quite the "whitewash" people curiously seem to believe it was.
It's interesting what CUDA and PhysX did to people's memories.
On top of this, I think NVIDIA lucked out with their partnership with Epic, as Unreal Engine 3 would go on to become the "Go To" Middleware of the Mid-Late 2000s.
Again they're pushing RTX via Unreal Engine 4, but today it's actually being rapidly replaced by Unity 3D (or a return to Proprietary Engines), meaning it's far less effective for pushing their technology.
I was quite vocal during the release of Polaris and Vega that what AMD had planned to do was force / trick NVIDIA into revealing their hand.
NVIDIA have now done this and what has AMD's response been?
Silence.
Because they can't respond? That's not the vibe I'm getting... if that were the case, they would've either focused more on their 7nm Gaming Plans at GDC *or* had a prepared announcement to try to take the wind out of the RTX announcement; heck, at this point they KNOW they could do this on Rumour alone, given how Polaris and Vega were severely over-hyped by the Media.
Instead what we've had in terms of official statements is that "AMD is (Very) Confident about 2019", which sure could mean just Ryzen, but again I don't think so, else that's what they'd have directly referenced.
If my predictions (at the top) about the RX "600" Series are correct... well, wouldn't YOU be confident as AMD about 2019?
Lisa Su confirms no 7nm Vega for Radeon series, will not compete against nVidia until 2019 with Navi
And no, I am decidedly not confident in AMD's competitiveness against nVidia until at least 2020, if then. The XBOX Scarlet and PS5 won't be able to run all games at 4k60, meaning that Navi isn't going to be as powerful as Vega, so the new generation's mid range isn't going to be last year's high end; and with AMD's "leapfrogging design teams", a replacement for Vega won't be released until 2020. All this, combined with the memory shortage that will continue into the future, means graphics card prices are also going to stay in the stratosphere, especially with brand new memory technologies, GDDR6 and HBM3, going to be used on 7nm graphics products. It's a repeat of the dark ages of 2004-2008, where nVidia was practically uncontested.
https://segmentnext.com/2018/09/25/ps5-xbox-scarlet-4k-60-fps/
"AMD wishes to avoid a repeat of Ryzen 1st Gen and Polaris 1st Gen... it didn't take long (6 months) for them to resolve the initial production teething problems."
Ryzen 1st gen had teething problems? From what I remember the processors were widely available from launch on. The motherboards, on the other hand, were another matter.
"while Polaris saw a complete refresh from RX 400 to RX 500 in < 6 months"
I think the RX 500 series launched April of 2017, about 10 months after the RX 400 series.
"Keep in mind that Vega 7nm for Radeon Instinct is a SINGLE Product., not an entire Production Series."
Isn't the current iteration of Vega both? The Vega Instinct, Pro, RX, etc. are all the same chip. Why would we expect things to be different in 7nm?
It is interesting that you have Navi as the elite product over Vega, when most of the information currently available suggests that Navi will replace the Polaris series, while Vega will remain the high end for the foreseeable future.
"NVIDIA have now done this and what has AMD's response been? Silence."
Well, it does sound like AMD is going to force out a Polaris 30 update next month. Not exactly silence if it turns out to be true.
@black_zion •
AMD at Computex 2018 - YouTube (1:29:38 - 1:29:46) it's a single sentence:
Lisa Su - "As excited as we are for the Radeon Instinct 7nm GPU, for all of you Gamers out there... we are definitely bringing 7nm GPUs to Gaming as well. So stay tuned on that."
It's quite interesting to see how this was spun in the Tech Media and Rumour Mill at the time.
This was then followed up with "Navi was designed exclusively for Sony" … which would've been impressive, as AMD first listed Navi as 'Next-Gen' in their December 2015 Roadmap (from an internal August 2015 Roadmap).
While I have no doubt that Sony were the first to place an order for Navi-Based Custom GCN, keep in mind that "Custom" essentially means there is no Retail SKU; as such, it's a "Custom Production SKU" for said Customer... which both Polaris 10 Console GPUs are or were (see: Radeon Pro WX5100).
•
Now going further than this... of course the Xbox Two and PlayStation 5 (the latter of which will likely have a Q4 2019 Launch, with Xbox Two following in Q3 2020) aren't going to be capable of 2160p60 in "All Games"; I never said they would be.
Instead I would suggest that they will both use Navi 32 at 1150MHz, which would provide 9.42TF (FP32), or RX Vega 56 (14nm Stock) performance. This would be a Custom (Monolithic) SKU, possibly 12nm instead of 7nm, but would still be ~220mm² and likely use 45-60w (GPU only), which, paired with a Ryzen 2nd Gen (12nm) 8 Core / 16 Thread at say 2.8 - 3.2GHz, would result in a combined 95w Component.
It's also likely they'll use hUMA (heterogeneous Uniform Memory Access) with High-Bandwidth Cache, High-Bandwidth "Fast" Memory and NVM SSD "Slow" Memory, omitting GDDR or DDR, providing an 'Effective' 32GB System Memory, 32GB Core (OS), and 64GB StoreMI (128GB Total) with a 1TB SATA HDD.
I'd say the HBM2 will likely be a 2GB "Scratch" Memory, while the HBCC will be maybe 48MB.
This would make the most sense, as Console Generations typically double in performance Gen-on-Gen, and for this to be the case from PS4 Pro to PS5... we're talking > 8.4TF performance, while for the Xbox One X to Xbox Two S we're talking > 11.6TF. This places my prediction (and as a note, I didn't even really consider this too much before you brought it up) fairly bang-on for the kind of performance we should expect to see from them.
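As a quick sanity check on that "roughly double per generation" rule of thumb, here's the arithmetic using the commonly cited FP32 figures for the current mid-generation consoles; the baseline numbers are my assumption, not something from this thread:

```python
# Rule of thumb: each console generation roughly doubles GPU compute.
# Baselines are the commonly cited FP32 figures for the mid-gen refreshes.
current_gen_tflops = {"PS4 Pro": 4.2, "Xbox One X": 6.0}

for console, tflops in current_gen_tflops.items():
    print(f"{console}: {tflops:.1f} TF -> successor target ~{2 * tflops:.1f} TF")
```

That lands at roughly 8.4 and 12 TF, right around the > 8.4TF and > 11.6TF targets mentioned above.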
As a further note here, keep in mind where the rumours of "Navi is Mainstream" are coming from.
They're directly in association with the Sony "Leaks" / "Rumours" … and I'd argue a 220mm² Die is DEFINITELY Mainstream, one that in a Discrete Package would likely use ~120-135w when you include Display Outputs, Memory and Fan power consumption.
•
In regards to the Memory Shortages, let me cast your mind back to the Global Foundries announcement last month that they've ceased production of all AMD GPUs / CPUs and are instead Re-Tooling for 22nm / 14nm ASIC and Memory Production. AMD still owns the Controlling Interest, as well as co-owning the HBM Technology, and GDDR5 is being phased out for GDDR6 (thus a cheap license for End-of-Life Products).
Sounds like a perfect candidate for AMD to source all of their Memory from, with 3 guaranteed clients: Sony, Microsoft and that Chinese Custom-AMD Solution (I forget the name).
ajlueke •
"Ryzen 1st gen had teething problems? From what I remember the processors were widely available from launch on. The motherboards on there other hand were another matter."
Availability wasn't the issue... Missing or Broken Features were.
It's why Raven Ridge was delayed until Ryzen 2nd Gen: without Precision Boost working (as intended) … it simply couldn't operate within the Power Envelope it needed to in order to compete with the Intel Core U-Series.
This isn't even getting into the fact that most were simply unable to reach the intended 4.0GHz Launch Frequency, or the various other Broken / Missing Features such as SenseMI (for increased Performance and IPC over time).
The same is true for both Polaris and Vega on 14nm... basically most of the Enhanced Features were missing, broken or disabled, not to mention the clocks were lower than intended.
"I think the RX 500 series launched April of 2017, about 10 months after the RX 400 series."
RX 400-Series launched Late-August / Early-September 2016.
RX 500-Series launched Early-April 2017.
That's 6-7 months... bear in mind that AMD releases Iterations 12 months, and Architectures 24 months, apart.
The codification of "Leap Frog" Development is merely the promise that in alternating years they'll focus on Processors and Graphics, but it doesn't change their release Schedules or Roadmaps.
"Isn't the current iteration of Vega both? The Vega Instinct, Pro, RX, etc. are all the same chip. Why would we expect things to be different in 7nm?"
Strictly speaking, you're right in a way... as Vega 7nm is more of a Test Bed, given it's a Componentised Production Queue.
The CUs are 7nm, the SoC is 12nm, the Memory Components are 14nm, etc., but with this said, AMD is also still focused on producing with On-Die HBM.
That makes it good as a Test Bed or an Isolated / Showcase Product, but not as a Full Production Product (as the Retail Radeons have to be).
"It is interesting that you have Navi as the elite product over Vega, when most of the information currently available suggests that Navi will replace the Polaris series, while Vega will remain the high end for the foreseeable future."
AMD themselves expressly pointed out that Navi would release in 2H 2019 … they also rebranded all of the Integrated Graphics Solutions as Vega Graphics instead of keeping the R3/R5/R7 Graphics naming, or even calling them Polaris Graphics (which, feature-wise, they are).
So I think it's clear that the RX "600" Series will be called RX VEGA; they've invested too much in their rebranding effort.
Now, if we also follow the fact that AMD has carefully established a pattern for their releases, that being:
Mainstream Primary, Budget Secondary, Professional (Prosumer) Tertiary … with Navi firmly pencilled in within the Prosumer / Professional Release Schedule.
And it isn't just Ryzen they did this with.
Notice how the RX 500 Series (and even the 500X-Series OEM Rebrand) occurred in Q1-Q2 2017 and 2018... the RX 540/550 launched in Q2-Q3 2017... the RX 520/530 in Q2-Q3 2018... RX Vega FE, 56, 64 and WX9100 in August 2017... which falls exactly in line with the Ryzen release schedules they've had.
This strongly suggests that Navi (Desktop) is NOT aimed at Mainstream but instead will be launched as a Prosumer / RX Vega 56-64-FE Replacement, because otherwise where will it sit in terms of performance?
Look at it like this... let's say they release a Navi 36 (RX 680), it'll be capable of 1700MHz, and let's also say it literally has no IPC gains over Polaris / Vega.
Well, that ends up being 1.19x the performance of the RX 580, but on 7nm it would be a 160mm² GPU... almost half the size; so, what... it'll be a $150 Product?
You think that'll replace the RX 580?
Keep in mind that a 44CU Navi is essentially the same size as the RX 580... it'll still be capable of up to 1700MHz, which would increase the performance to 1.46x.
That, notably, is on par with the RX Vega 56 at this point.
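A minimal sketch of the naive scaling estimate behind those multipliers, assuming performance scales linearly with CU count and clock (no IPC gains) against an RX 580 baseline of 36 CUs at ~1430MHz, roughly a factory-overclocked partner card; the exact figure obviously shifts with whichever RX 580 clock you assume:

```python
# Naive linear scaling: relative performance ~ (CU ratio) * (clock ratio), no IPC gains.
def relative_perf(cus: int, clock_mhz: float,
                  base_cus: int = 36, base_clock_mhz: float = 1430.0) -> float:
    return (cus / base_cus) * (clock_mhz / base_clock_mhz)

# Hypothetical 7nm parts versus the RX 580 baseline above.
print(f"36 CU @ 1700 MHz: {relative_perf(36, 1700):.2f}x an RX 580")  # ~1.19x
print(f"44 CU @ 1700 MHz: {relative_perf(44, 1700):.2f}x an RX 580")  # ~1.45x
```

That reproduces the ~1.19x and ~1.46x figures above before any IPC or feature gains are counted.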
Now I'd argue that the Athlon 200GE is what we need to look closely at here, as it has unusually higher performance than it should when compared against Ryzen with Graphics... between 15-60% performance improvement depending on the Application, which evens out to ~30% Avg. Performance Improvement.
This isn't comparing Fiji to Polaris / Vega (which, remember, saw very little meaningful performance uplift; instead it was purely higher Clocks and Memory that resulted in the better performance)... we're comparing what should be Apples-to-Apples.
Same CPU Architecture, same GPU Architecture... and at a LOWER (Locked) Frequency. Yet still we have this unexplained +30% Avg. Performance.
This however should sound familiar... because this is EXACTLY what AMD was promising Ryzen with Graphics would deliver, Clock-for-Clock, in terms of performance over Excavator with Fiji. It really didn't quite work out that way, as Vega 8 vs. Fiji 8 was essentially the same performance at the same clocks.
And remember, this is the EXACT missing performance from Primitive Discard essentially being "Missing-in-Action" as a Vega Feature... i.e. they finally got it bloody working (awesome).
As such we can expect the RX "600" Series to showcase a SIMILAR +30% Avg. Performance Uplift.
(and yes, I'm aware I've listed Navi as essentially a 2x Performance Improvement, but there's a reason behind that based upon the Vega Pre-Launch Information from AMD)
The result is that we'll see more CU / mm² (approx. 35-40%) along with the Primitive Discard +30% Avg., which results in an overall Generation-on-Generation 65-70% Performance Uplift... at least if we're talking about "Maxed Out" Clocks. AMD however will pull that back to a more reasonable 1400MHz; thus my predicted Performance, CU and Power figures.
That would appear to be essentially AMD's "High-End" becoming their "Mid-Range"... but remember, NVIDIA pushed the performance leap higher than it should've been; as a result AMD really has little choice but to make said dramatic leap this generation, with a more moderate one moving to Navi.
"Well, it does sound like AMD is going to force out a Polaris 30 update next month. Not exactly silence if it turns out to be true."
Where the heck did you hear that from? The only "Polaris" Product they released recently was the WX5100, and technically speaking that's just the PS4 Pro GPU, which it would make sense for them to have excess stock of if Sony is gearing up for the PS5 Launch in Q3-Q4 2019... as they likely have more than enough to cover the rest of the year.
So, most of your speculation on future products is based on this unexplained uptick in performance of the Athlon 200GE, which potentially indicates that primitive discard is now working. What data are you basing the +30% performance gain of the Athlon 200GE on? I haven't been able to find anything that would indicate it has 30% higher performance than what we would expect vs the other Ryzen chips with Vega graphics.
As for the Navi as mainstream only rumor, I will agree that the rumors never made much sense to me. The rumors seemed to indicate that Navi would bring GTX 1080 performance in at $250. While all well and good, what does that mean for Vega? Vega would then be a more expensive high end GPU that performs on par with the new mainstream. So, without at least releasing a 7nm Vega refresh, that move doesn't make any logical sense.
Not to be a stickler, but I totally am. Didn't the RX 480 launch June 29th 2016? Or am I missing something here?
And finally, concerning the Polaris rumor.
AMD May Be Prepping New Polaris 30 GPUs For October Launch - ExtremeTech
Remember what AMD said: leapfrogging design teams, so mainstream cards release one year and high end the next, alternating back and forth, which was the main reason for splitting the high end series designations off into their own brands (Fury, Vega). Also remember that Navi was ALWAYS intended to be a mainstream replacement for Polaris and to release in 2018, but Raja was massively behind schedule when he left the company. Lisa Su directly controls RTG and has dedicated the vast majority of resources to the semi-custom market, which effectively ties new graphics card releases to the semi-custom market. Navi will power XBOX Scarlet and PS5, and with them slipping to a 2019 release date, AMD adjusted: it enabled a 7nm Vega for the professional market, as it (quite likely) has an incredible performance advantage over 14nm Vega thanks to deep learning and other professional features that were slated for 14nm Vega to compete against nVidia Tesla but did not make it in time, while slipping Navi to 2019. "Polaris 30", if it indeed does exist (and it's not unreasonable to assume it won't, considering TSMC's 7nm process is apparently quite mature already), will not be anything special compared to existing cards, as it is just a refinement of existing technology, nothing that will make or break playability, though it will improve performance per watt and reduce manufacturing cost. Prices will still remain high due to the RAM shortage, AMD continuing to rely on valueless "free games" to sell graphics cards instead of price reductions (seriously, a $300 card with "free games valued at" $60 is still a $300 card), the high performance of the GTX 1060 and 1070, and the Mount Everest-sized pile of GPUs nVidia is sitting on from overproduction during the cryptocurrency boom, which will let nVidia flood the market with cheaper GTX 1000 series GPUs.