Well, it's official: Processor Specifications | AMD
Ryzen 5 3600 6C/12T 3.6 / 4.2 GHz 65W 32MB L3 Cache
Ryzen 5 3600X 6C/12T 3.8 / 4.4 GHz 95W 32MB L3 Cache
Ryzen 7 3700X 8C/16T 3.6 / 4.4 GHz 65W 32MB L3 Cache
Ryzen 7 3800X 8C/16T 3.9 / 4.5 GHz 105W 32MB L3 Cache
Ryzen 9 3900X 12C/24T 3.8 / 4.6 GHz 105W 64MB L3 Cache
which means the rumors were a bit off on frequencies.
At least now the part numbers are available, but pricing is still unknown.
The 40 PCIe lanes on the top processor are nice.
Thought I'd post these here, official from AMD:
Really anxious to see some independent benchmarks.
Aye, Lisa Su threw around 3 statements during the keynote which will make the benchmarks interesting:
More details are supposed to be revealed at E3 in 10 days, and with a launch date just over a month away, they should be in the hands of reputable reviewers shortly. That X370 "maybe" is going to tick a lot of people off, especially since this may be the end of the Socket AM4 line, and nobody wants to buy a board for one model, unless they're on Intel.
I believe the "maybe" is due to power requirements.
I doubt it's power requirements: most X370 boards were built to overclock and can provide the needed power, though some of the lower-end X370 boards may be iffy on that.
Another thing to think about is how Windows 10, especially OEM editions, will react to a motherboard change.
A valid concern. Usually you wind up having to call Microsoft to get the OS activated again.
Well some of us use linux, and going to R9 (12 cores) for number crunching would be worth the expense.
The benchmarks will be really interesting indeed. While an IPC bump is nice, 15% isn't really enough to warrant an upgrade. That L3 cache bump may yield higher performance, especially in applications that have to read from RAM a lot.
It's not just the IPC upgrade; it's the ability to have your first PCIe slot run at PCIe 4.0 that can provide the incentive for someone to upgrade from a first-generation Ryzen on an X370 board. Processor-wise it's about a 20% performance jump, which isn't small, but the main benefit would be the first-slot PCIe 4.0 capability. Thanks to TechPowerUp we know that single GPUs are at the point where 3.0 x8 starts to bottleneck the fastest card, namely the RTX 2080 Ti, and since X370 boards are limited to either x16/x0 or x8/x8, having the first slot run at 4.0 x8, the equivalent of 3.0 x16, will extend the life of your system. AMD claims Navi will perform between the RTX 2070 and 2080, so their Vega replacement next year should best the RTX 2080 Ti.
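The "4.0 x8 equals 3.0 x16" equivalence follows directly from the published per-lane signaling rates (8 GT/s for PCIe 3.0, 16 GT/s for 4.0, both using 128b/130b encoding). A quick back-of-envelope check in Python:

```python
# Effective one-direction PCIe link bandwidth from the raw transfer
# rate, scaled by the 128b/130b line-code efficiency used by
# PCIe 3.0 and newer generations.
def pcie_bandwidth_gbps(gt_per_s, lanes, encoding=128 / 130):
    """Approximate one-direction bandwidth in GB/s."""
    return gt_per_s * encoding * lanes / 8  # 8 bits per byte

gen3_x16 = pcie_bandwidth_gbps(8, 16)   # PCIe 3.0, 16 lanes
gen4_x8 = pcie_bandwidth_gbps(16, 8)    # PCIe 4.0, 8 lanes

print(f"PCIe 3.0 x16: {gen3_x16:.2f} GB/s")  # 15.75 GB/s
print(f"PCIe 4.0 x8:  {gen4_x8:.2f} GB/s")   # 15.75 GB/s
```

Doubling the signaling rate while halving the lane count cancels out exactly, which is why a 4.0 x8 first slot matches a full 3.0 x16 slot.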
I have a GTX 1060 and I use a 3840x2160 IPS panel, makes sure my video card earns its keep
The problem is that PCIe 4.0 is being added late in AM4's lifespan. By the time GPUs saturate PCIe 3.0 x16 and you would actually need PCIe 4.0 x16, AM4 will likely have been replaced by the next socket. So sure, you'll have better futureproofing for GPUs with an X570, but likely only one more generation on the CPU front. Buying an X570 board gets me a little extra PCIe bandwidth (which I won't need unless I buy the highest-end GPU next generation... maybe) and no better upgrade path at the CPU level. Not really a compelling reason to upgrade.
Couple that with the fact that PCIe 5.0 is basically ready to go, one may as well stick with the current setup and wait for the post AM4 motherboards in 2021 that will also support PCIe 5.0.
https://www.techspot.com/news/78355-pcie-50-ready-before-pcie-40-can-launch.html
You're forgetting one thing: DDR5. DDR5 should be incorporated into what will likely be Socket AM5 and the Ryzen 4000 series, along with PCIe 5.0, so an upgrade to that platform will entail another $200+ RAM purchase. As it stands, with the Ryzen 3000 series enabling your existing board to support one PCIe 4.0 slot, the GPU is futureproofed, at least until cards are twice the speed of the 2080 Ti, which isn't happening for quite a while. So really, anyone with a Socket AM4 system and a board which supports the 3000 series should upgrade to one if they're planning on going long.
PCI Express 5.0 and DDR5 are still a ways off. Maybe when 7nm firms up more next year there might be a move by Intel and AMD in that direction. Yields are still below where manufacturers need them to be to make a reasonable profit.
The GPUs having to be double the speed would only be true if that were the only thing that changes. Already, vendors are using up the PCIe 4.0 lanes with 802.11ax Wi-Fi and 5-gigabit/10-gigabit LAN controllers (on the X570 boards, anyway).
"So really anyone with a Socket AM4 system and a board which supports the 3000 series should upgrade to them if they're planning on going long."
I had forgotten about this part. So really, X470 users could easily get PCIe 4.0 functionality just by dropping in a 3000 series. That's actually a pretty solid reason, the extra performance on the 3000s is okay, but extra PCIe lanes can help extend that GPU life as well. It does seem like X370 users are left out in the cold a little bit.
I have a feeling most X370 boards will get it, especially the top end ones like the Crosshair VI Hero, as ASUS has already been releasing BIOSs containing the new AGESA with support for "next-gen processors".
So does simply enabling support for the CPU automatically enable the PCIe 4.0 data rate on the first PCIe slot? Or is that something that has to be enabled in BIOS separately?
Gigabyte's new BIOSs for the 400 and 300 series boards include an option to set the PCIe speed to Gen 4. I suspect ASUS and the other manufacturers will follow suit.
So far, nothing from MSI for my motherboard.
ASRock has announced support, even back through the X370 series: https://www.asrock.com/news/index.asp?iD=4238
But so far, the update for my UEFI hasn't shown up on the download page.
Well, so much for that.
AMD Nixes Support for PCIe 4.0 on Older Socket AM4 Motherboards, Here's Why
Guess I won't be upgrading this generation.
Makes sense. They need to isolate the electronic circuitry more due to the high power requirements; otherwise, I am guessing, there would be interference issues. I was surprised at the initial talk that they could make it 4.0-compatible.
Sure. But it does lessen the appeal of doing a 3000 series upgrade on an existing board. A 15% IPC increase is okay, but I'd need to buy a whole new motherboard to get PCIe 4.0. A 3000 series upgrade was more compelling with both an IPC increase and support for the higher signaling rate.
Do any current GPUs take advantage of PCIe 4.0? If not, it probably won't be until next year that a GPU is made that can take advantage of it, and then probably only the very high-end GPUs, above the 2080 Ti, will demonstrate a real performance difference. When I looked at 2.0 vs 3.0 performance in the past, there wasn't much of a difference in a lot of games, but that was tested with previous-gen GPUs, not the 2080 Ti.
I just don't see PCIe 4.0 as that big of a deal unless you have, or plan on getting, something like a 2080 Ti in the next couple of years.
Right. But there was some upgrade path potential there. If the 3000 series added PCIe 4.0 support to older X370 and X470 motherboards with a CPU upgrade, there would be no reason to purchase a new motherboard when higher end GPUs come out.
Navi launches with PCIe 4.0 support this year (of course, it won't saturate PCIe 3.0). Now, when GPUs advance to needing PCIe 4.0 (2021?), AMD will likely have their new socket out, which may also require DDR5. So taking advantage of that GPU upgrade will be far more expensive.
Whereas, if you could just buy a new GPU and drop it in to your old X370/X470 AM4 board with PCIe 4.0, that would likely be good enough until the CPU wasn't up to snuff anymore.
Same here; there's no reason to upgrade to a new processor for $330-$400 if all you get is 15% faster performance, especially considering this is the last chip for Socket AM4. Basically, AMD just gave every owner of a 300 or 400 series board a reason NOT to buy a 3000 series chip, since there's little point buying a new chip now only to have DDR5 and PCIe 5.0 released next year...
Seems that MSI has not yet provided a BIOS update for the new processors.
Guess my B350M Bazooka is to be left twisting in the wind...
When I saw the figures in the table above, or, rather, when I saw those for the Ryzen 5 parts in addition to the Ryzen 7 and Ryzen 9 parts, I had a question. If the Ryzen 5 3600X requires 95W TDP, how on Earth can the Ryzen 9 3900X, with twice as many cores, and running slightly faster, manage on only 105 watts?
That's the only thing that seemed too good to be true, instead of just really good.
It's like the difference between the 3700X and 3800X: 40 W more TDP for a 300/100 MHz base/boost clock increase. We'll have to wait for the third-party benchmarks to see if the figures pan out, since even in overclocking terms that's a heck of a power increase for such a small performance gain. It is possible, however, that the chips are configured differently, so that they consume about the same power but the 3800X is allowed more power headroom to maintain its turbo frequency longer.
The similarity is likely because of the chiplet design AMD chose for the Ryzen 3000 series. Effectively, the I/O and DRAM controllers have been split off from the CPU cores and L3 cache and now exist as a separate 14nm die. This is the portion of the CPU that likely generates much of the heat, and since it is exactly the same on all of these CPUs, the thermals are similar. The 7nm portion that contains the actual cores and cache still adds heat, just not as much.
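That split suggests a simple toy model: package power is roughly a fixed I/O-die share plus a per-core cost that depends on clocks and binning. The wattages below are purely illustrative assumptions, not AMD figures, but they show how a 12-core part can land near a 6-core part's TDP:

```python
# Toy model: TDP ~ fixed I/O-die power + per-core power.
# IO_DIE_W and the per-core wattages are illustrative guesses,
# not measured or published AMD data.
IO_DIE_W = 30.0  # hypothetical share for the 14nm I/O die

def package_watts(cores, watts_per_core):
    """Estimated package power for a given core count."""
    return IO_DIE_W + cores * watts_per_core

# If 7nm cores draw roughly 6-10 W each depending on clocks:
six_core_high_clock = package_watts(6, 10.0)   # 90.0 W, near the 3600X's 95 W
twelve_core_binned = package_watts(12, 6.0)    # 102.0 W, near the 3900X's 105 W
print(six_core_high_clock, twelve_core_binned)
```

With a large fixed term, doubling the core count at lower per-core clocks only adds modestly to the total, which is consistent with the 3900X's 105 W rating.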
Historically AMD has rated the TDP of their CPUs as a worst-case scenario vs Intel's "average" (read: nowhere near correct) number. It's also very possible these are typos. With more info coming out at E3 in a week, we should find out if this is the case.
I am very favorably impressed by the new Ryzen processors, and I am planning to use one in my next system. One of the major new features that made my decision was that the amount of floating-point muscle per core is going to be doubled in this generation of Ryzen. However, because the competition has also been doubling their floating-point muscle per core, through ever-wider vector extensions to the instruction set, apparently they still have more.
So one thing I'd like to suggest to AMD is that they consider taking one good idea from Bulldozer: the ability to combine the floating-point units of two cores to give one core more floating-point power. That is a good idea, and not a bad one, if it isn't used as an excuse not to provide each core with enough floating-point power.
On July 7, AMD will assume the mantle of technology leadership with its new Ryzen chips that have 15% more IPC than the previous generation. But now the competition is saying that this fall, their new 10nm chips will have 18% more IPC than their old ones. If their claims can be taken at face value, that means you'll be almost 3% farther behind. Two months of technology leadership is not enough.
I'd like to make another suggestion for product improvement to AMD. I think there is a way that AMD could, with existing technology, improve the performance of its microprocessors so much that the competition would be left in the dust for... well, close to a year, maybe.
Perhaps it ought to be called 3D Now! II so as to help ensure the competition won't take it seriously until it's too late.
The patents on the Cray I have expired by now. Today's microprocessors do perform calculations on vectors, but they use an approach that derives from the SAGE air-defense system, the AN/FSQ-32, or the TX-0 - splitting a long word into smaller parts. The Cray I used a different approach to vectors, involving vector registers that contained up to 64 double-precision floating-point numbers.
That approach has become unfashionable of late; it does require quite a bit of memory bandwidth. However, there's still one supercomputer product that uses it, the SX-Aurora TSUBASA from NEC, so it's not infeasible with today's technology.
And AMD has expertise in connecting multiple dies to a substrate inexpensively, as shown by the new Ryzen processors among other AMD products.
The NEC product I mentioned only manages about half the FLOPS of a graphics processor. But except for the restriction that the operations have to be applied to vectors to get that performance, the other limitations of GPU computing don't exist. So this approach produces usable floating-point power.
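To make the contrast concrete, here is a rough sketch, in plain Python, of how the Cray-style model works: a long loop is "strip-mined" into chunks matching the hardware vector length (64 elements on the Cray I), with each chunk notionally loaded into a vector register and processed by a single vector instruction, rather than SIMD's fixed short widths packed into one word. The function name and chunked structure are illustrative, not real hardware semantics:

```python
VECTOR_LENGTH = 64  # the Cray I's vector registers held 64 doubles

def vector_axpy(a, x, y):
    """Compute y = a*x + y, one vector-register-sized chunk at a time."""
    result = []
    for start in range(0, len(x), VECTOR_LENGTH):
        # Each iteration models one vector-register load, one vector
        # multiply-add instruction, and one vector store; a final
        # partial chunk just uses a shorter vector length.
        vx = x[start:start + VECTOR_LENGTH]
        vy = y[start:start + VECTOR_LENGTH]
        result.extend(a * xi + yi for xi, yi in zip(vx, vy))
    return result

# 130 elements -> chunks of 64, 64, and 2
print(vector_axpy(2.0, [1.0] * 130, [0.5] * 130)[:3])
```

The point of the register-based approach is that one instruction drives up to 64 operations, so instruction-issue bandwidth stops being the bottleneck; the cost, as noted above, is the memory bandwidth needed to keep those long registers fed.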
You want a way to blow away the competition? This is it.