General Discussions

blackmaninc
Adept II

When will PCIe 4.0 Motherboards be available?

Because I can't wait to get rid of this god damn computer and buy a brand new one. I really would like to buy it along with the new Radeon VII to rid myself of this nightmare ASAP.

0 Likes
11 Replies

If you really do have this problem, then you can go ahead with this to get rid of it.

0 Likes
ajlueke
Grandmaster

Hello,

There was indication from AMD that PCIe 4.0 will not be locked out on current X370 and X470 motherboards, and that they can get PCIe 4.0 functionality with a BIOS update.  It will be up to the motherboard makers to implement that.

But what about that X570 motherboard? Did they say when they were planning on releasing it?

0 Likes

Strictly speaking, Zen(+) is also capable of supporting PCI Express 4.0, as PCI Express is not the Native AMD Bus.

Ryzen, as a note, Natively Supports DDR4 at 2400 / 2666 / 2933 / 3200 MT/s... this means 19.2 / 21.3 / 23.5 / 25.6 GB/s per Channel, with either 2 (Ryzen), 4 (Threadripper) or 8 (Epyc) Channels; although strictly speaking, keep in mind that they operate in Pairs.

Thus you can use it in R/W Mode (i.e. 2666 MT/s = 42.6 GB/s) or R+W Mode (i.e. 2666 MT/s = 21.3 GB/s Write + 21.3 GB/s Read).
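
As a quick sanity check on those figures (my own arithmetic, not from the post; peak DDR4 bandwidth is simply the transfer rate times 8 bytes per 64-bit channel):

    # Peak DDR4 bandwidth: transfers/s x 8 bytes per 64-bit channel
    for mts in (2400, 2666, 2933, 3200):
        per_ch = mts * 8 / 1000  # GB/s per channel
        print(f"DDR4-{mts}: {per_ch:.1f} GB/s/channel | "
              f"2ch {2 * per_ch:.1f} | 4ch {4 * per_ch:.1f} | 8ch {8 * per_ch:.1f} GB/s")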

Both Infinity Fabric (HXT) and PCI Express (PCIe) are, technically speaking, Serialised Buses, but Infinity Fabric uses a Clock-Synchronised Stream as opposed to Prioritised Request & Response.

In essence it's better to think of HyperTransport / Infinity Fabric more like the DDR Memory Bus (which, for all intents and purposes, it is) while remaining Pin-Compatible with PCI Express, which is instead based upon the Router-Endpoint Serial Bus model (i.e. USB, Networking, etc.).

Now strictly speaking, when we're talking about a Low-Population Bus (i.e. the Best Case for PCI Express), it is capable of ~8-10% Better Bandwidth on an Individual Device Basis... however, when we account for Root Complex End-Point Switching; with say a Keyboard (USB), Mouse (USB), Graphics (PCI-NB), Audio (PCIe), HDD/SSD (SATA / M.2 PCIe)... well, this "Advantage" actually completely disappears.

Keep in mind that each Switch results in Latency, as well as the need for the PCI Express Controller to Manage the Data Flow... whereas, because HXT is essentially performing a R+W within the same Cycle, this means that regardless of the number of Devices connected (up to the Maximum Lane Utilisation) there are no Bottlenecks to the Data Stream.

Still, Synchronisation does have the downside that the System as a whole is bound by the Peak of the Slowest Component (be that Memory, Processor, Chipset)... they all basically have to be in Harmony... whereas the Intel approach means each subsystem can operate at the Peak it allows, with various "Management" Processors handling the Work Group Queues that occur between them.

Oh and remember Zen was designed from the outset with the assumption that Intel were going to release PCIe 4.0 in 2017.

After all, the Final Specification was released in Early 2016, and Intel did showcase a working prototype in Late 2016... not to mention the Licenses were granted to several groups during the same period, including AMD.

As such it's somewhat reasonable to ask... "Where the Heck are the PCIe 4.0 Devices and Platforms?!"

Well, first and foremost, realistically very few Devices actually need the additional bandwidth of PCIe 4.0... primarily we're talking about High-Performance Networking, Graphics Accelerator Cards and Compute Accelerator Cards.

And well... that's really it, and even then it's not like any of these strictly require it per se.

AMD APUs use Infinity Fabric / HyperTransport instead... I mean why wouldn't they? It's not like they have to be compatible with Intel Chipsets; and they're limited in performance by the System Memory anyway, so it literally makes no sense to License / Use PCI Express for such.

The result being... if nothing is using PCIe 4.0, then what's the point in having to pay the Per-Unit Royalty for something no one is using?

I mean this is the same reason that Microsoft, with Windows 10, stopped bundling DVD / Blu-ray support (i.e. Physical Video Media), instead shifting said support to an Optional Application that they charge a small fee for.

While sure, the Royalty costs are $1.00 / Unit for DVD and $1.28 / Unit for Blu-ray... keep in mind that Windows 10 is now installed on ~700M Computers.

That's ~$1.6B in Royalties (700M x $2.28) for a Feature basically no one uses because Netflix, Amazon Prime, Hulu, Microsoft Video, iTunes, Google Play, etc. all exist.
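
(A quick sketch of that arithmetic, using the post's own per-unit figures; the total only works out in billions, not millions:)

    # Royalty estimate from the post's figures: $1.00 (DVD) + $1.28 (Blu-ray) per unit
    installs = 700e6                    # ~700M Windows 10 machines
    total = installs * (1.00 + 1.28)    # USD
    print(f"~${total / 1e9:.2f}B")      # ~$1.60B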

And we're not even including the Production Cost for such a feature.

So there's that element... but there's also the aspect that, keep in mind, the PCI-SIG group is Intel, Dell, HP and IBM.

As a result there are some stipulations in regards to the Licensing where, until they release a Product to Market that utilises a given standard, the Royalty Costs increase. That, while arguably objectionable... does make sense, as of course Intel wants to keep some form of advantage, to be able to introduce and retain a period of "Exclusivity" for their own Technologies. And frankly, the only reason the PCI-SIG exists in the first place (much like the HyperTransport Consortium, although that's arguably disbanded now that AMD have replaced HXT with Infinity Fabric) is to bypass Anti-Trust Regulations.

What I mean is... if Intel simply kept the Technology for themselves (exclusively), then BECAUSE it's a ubiquitous Standard (i.e. it's the only option, due to Industry Adoption being > 70%), Intel could be liable for Anti-Trust Practices.

By having it as an "Open Standard" (via the PCI-SIG), they get around this by saying "Well, we're not stopping anyone from using it... nor are we unfairly charging when there is Competition"... so, yeah, in that regard it ends up on AMD to either wait for Intel to introduce such support, making the cost-to-support nominal; or be first-to-market and have that incur a notable increase in their Product Costs, to where they become potentially uncompetitively priced or have very marginal profitability.

And again, if NO ONE is actually supporting it... then it's just a pointless additional cost for "Potential" future utilisation.

They'd have had to make the RX 500 Series and RX Vega Series support PCIe 4.0... JUST for said support to make sense, and to encourage NVIDIA to follow suit; but again, that's an additional cost for being "First-to-Market"; which, eh, isn't a good place to be when you can't exactly just throw Billions at a problem in the hope of triggering wide-spread adoption; and not even for your own Technology.

Well there's that, and Intel does have a habit of being a bit of a dick... and pushing out a Point-Revision with just enough incompatibility as to make said specification obsolete. (see: SSE4.1, SSE4.2 and SSE4a)

Now it does seem like AMD are going to at least tentatively support PCI Express 4.0... although the rumour at present is that only a Single x16 Slot will support it; regardless of whether we're talking Ryzen 3rd Gen (500-Series Chipset) or BIOS-Enabled Support on the 300/400-Series Chipsets. AMD will also only be supporting it on the Zen 2 Processors; again, even though Zen(+) could support it; likely to avoid having to pay Retroactive Royalties.

Personally speaking... I don't think AMD should be supporting PCI Express 4.0, and instead should be using their influence to introduce a successor to HXT; which was very short-lived and a failure in the Server Space.

If they produced an "Infinity Link" (I can actually think of a pretty cool logo for it)... as a Pin-Compatible replacement for PCI Express, USB and M.2, with them making partnerships to specifically offer a full "Range" of Products that support it (alongside the current standard compatibility); with the difference being that the Bandwidth / Performance could be improved via Infinity Link.

Well, I mean it would put them in a very strong position to actually have it adopted.

Look at USB... with 3.0, 3.1, 3.2... not to mention Type-A, Type-B and Type-C (where Type-C is indiscernible from Thunderbolt)... like, it's slowly returning to the same mess the old RS-232-era Serial Ports were (which USB was introduced to resolve).

AMD has the momentum right now... maybe not the Market Dominance; but the Momentum.

You couple that with the current Consumer market having positive reactions to more Consumer-Friendly approaches (akin to what AMD used to be renowned for), and being quite negative towards the Traditional Deliberate Segmentation / Obsolescence practices that Intel, Google, Apple, NVIDIA, etc. are engaging in... well, they have a fantastic opportunity to just be themselves and gain a VERY secure foothold on dictating the terms / standards going forward.

And that's the Position that AMD absolutely MUST be in... because consider how many times their Failures have been due not to the Products they've Produced, but rather to the Politics of an Industry where they were never really able to have anything more than a Token Seat at the Table.

I mean, use the Enthusiast Consumers... essentially drive adoption through their need to always have bragging rights (Cost be Damned, provided they can claim they have "The Best")... instead of the Core or Server Consumers, who somewhat want Good Value, High Productivity and Battle-Tested Technology.

All those years of having to operate and survive on Margins, as opposed to throwing money at a problem... well, use that to force the Competition into a position where it simply isn't comfortable competing, because they'd lose too much stock value.

That's how Ryzen "Changed the Game" as it were. So expand that idea.

At least that's how I feel about the subject.

0 Likes

PCIe 4.0 will be introduced with Zen 2 and 500-series chipsets this year, and possibly enabled on 300 and 400 series boards as well (for at least one slot) by motherboard manufacturers. But the PCIe 5.0 spec is already essentially complete, with products coming to market late this year, so it is entirely likely Zen 3 in 2020 will offer PCIe 5.0 functionality. The good news is that for home users, PCIe 3.0 is still more than sufficient for anything you need it for, so you're not missing out on anything.

https://www.tomshardware.com/news/pcie-4.0-5.0-pci-sig-specification,38460.html
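
For reference, the per-lane numbers roughly double each generation; a quick sketch from the published raw rates and line encodings (8b/10b for Gen 1/2, 128b/130b from Gen 3 onward):

    # Raw rate (GT/s) and line-encoding efficiency per PCIe generation
    gens = {"1.x": (2.5, 8 / 10), "2.0": (5.0, 8 / 10),
            "3.0": (8.0, 128 / 130), "4.0": (16.0, 128 / 130),
            "5.0": (32.0, 128 / 130)}
    for gen, (gts, enc) in gens.items():
        lane = gts * enc / 8  # GB/s per lane, per direction
        print(f"PCIe {gen}: {lane:.2f} GB/s/lane -> x16 = {lane * 16:.1f} GB/s")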

0 Likes

The existing limited bandwidth of PCIe was primarily responsible for MultiGPU falling out of favor.  Quite a few rendering techniques have inter-frame dependencies, which require portions of the previous frame to be carried over into the next.  This makes the rendering far more efficient and greatly increases FPS, as portions of the previous frame can be copied from the cache without having to be re-rendered.  With MultiGPU, the data from the previous frame is on a separate GPU, so the info either has to be copied between GPUs or rendered over again by the second GPU.  PCIe bandwidth limited the amount of data that could be copied between GPUs without adding latency, so it wasn't a much better option than just re-rendering.  Both options really killed the scaling users saw in comparison to a single GPU.  It really is now to the point where most developers don't bother with MultiGPU support at all.
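
To put hypothetical numbers on that (the buffer size below is purely an illustrative guess, not a measured figure):

    # Illustrative inter-GPU copy cost vs. a 60 FPS frame budget
    buffer_gb = 0.1            # assume ~100 MB of inter-frame data (a guess)
    pcie3_x16 = 15.75          # GB/s per direction, PCIe 3.0 x16
    frame_ms = 1000 / 60       # ~16.7 ms per frame at 60 FPS
    copy_ms = buffer_gb / pcie3_x16 * 1000
    print(f"copy ~{copy_ms:.1f} ms of a {frame_ms:.1f} ms budget")  # ~6.3 ms, ~38%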

While NVLink does massively increase GPU-to-GPU bandwidth, most developers won't build out an engine assuming a user will have support for a limited-use proprietary standard.  With PCIe 5.0 and that increase in interconnect bandwidth being available on most motherboards, we may again see MultiGPU being built out into more gaming engines.

0 Likes

No, the primary reason multiple GPUs fell out of favor was the issues relating to using multiple GPUs (microstuttering, non-linear scaling resulting in anywhere between 0-100% gains, etc.), with the secondary reason being that cards like the GTX 1080 Ti are able to push high frame rates at ultra-high resolutions, and mid-range cards are capable of high frame rates at the most popular resolution, 1920x1080. A tertiary reason could also be that since many PC games are either console ports or co-developed with the console version, and consoles only have a single GPU, it doesn't justify the extra effort for multiple-GPU programming. Also, with DirectX 12 and Vulkan, developers are solely responsible for multiple-GPU development, which is why you are seeing more games lack multiple-GPU support. It has nothing to do with PCIe bandwidth.

0 Likes

As I addressed, the non-linear scaling is due in large part to the limitations of the PCIe bus.  As rendering methods get more sophisticated, inter-frame dependencies have grown in game engines.  That is great for making games run better on a single GPU, as the information from the previous frame that will be utilized in the next can be loaded back out of VRAM or the cache, leaving the GPU free to render the "new" parts of the frame.

Those methods are bad for MultiGPU setups though, as the data from the previous frame is stored on a separate GPU.  You then either have to copy it (over PCIe) from one GPU to the other, or just re-render the entire frame.  Either way, you get non-linear scaling, as the MultiGPU setup involves a slow copy step or rework not present in a single-GPU setup.  You can get rid of inter-frame dependencies in the game engine, and eliminate the performance hit to MultiGPU in the process.  But doing so will reduce performance on single-GPU systems, as all information in every frame is always re-rendered.  As games typically are developed on consoles, inter-frame dependencies are favorable to squeeze all the frames you can out of the lower-end hardware.

There simply isn't enough PCIe bandwidth to copy that amount of data from one GPU to another without introducing serious latency (stuttering).  And re-rendering all the data on the second GPU would vastly decrease the scaling.  So, it really isn't worth building out MultiGPU at all, because the benefits will be negligible.  You can use methods like frame pipelining to improve results (a good primer on all that is found here: https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12).  But another solution is to vastly increase the interconnect bandwidth between GPUs.  That way, frame dependencies can be left intact and copied without introducing latency, restoring scaling.  That is what NVLink is really designed to do: improve that interconnect speed so developers will decide to incorporate it.
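
As a rough comparison of the interconnects being discussed (per-direction bandwidths from the public specs, with NVLink 2.0 figured for a 6-link configuration as on V100; the copy is the same hypothetical 100 MB buffer as above):

    # Time to move a hypothetical 100 MB buffer across each interconnect
    links = {"PCIe 3.0 x16": 15.75, "PCIe 4.0 x16": 31.5,
             "NVLink 2.0 (6 links)": 150.0}   # GB/s per direction
    for name, bw in links.items():
        print(f"{name}: {0.1 / bw * 1000:.2f} ms")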

So, PCIe bandwidth is the reason MultiGPU just doesn't work like it used to, especially when employing traditional AFR.  The need for more bandwidth follows from developers' decisions to employ rendering engines utilizing frame dependencies.  Given the performance improvement those rendering techniques confer on single-GPU setups, it is unlikely that they will go away.

0 Likes

Would PCIe 4.0 or 5.0 increase the max wattage of 75 watts per slot, or does that not matter in this case?

0 Likes

All I've read about PCIe 4 and 5 has to do with speed increases, namely at the enterprise level for network interconnects. I suspect that since PCIe is intended to be backwards compatible, they wouldn't remove the 75 W slot limit, especially since GPUs are the only thing I can think of which suck down more than 75 W.

I do remember reading some article a while back on TomsHardware about PCIe 4.0 allowing up to 400 W per slot, but that was incorrect and was later corrected. I believe that dual 8-pin (300 W) GPUs will be considered good in the official PCIe 4.0 spec, and that is probably where the confusion came from. I'm not sure how much I'd trust thin PCB traces and wires to be able to carry a kilowatt of power without shorting out.
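
For reference, the usual power-budget arithmetic (standard figures: 75 W from the slot, 75 W per 6-pin, 150 W per 8-pin auxiliary connector):

    # Board power = slot limit + auxiliary connectors (standard PCIe figures)
    slot_w, eight_pin_w = 75, 150   # watts
    print(f"slot + dual 8-pin = {slot_w + 2 * eight_pin_w} W")  # 375 W total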

PCI EXPRESS 5.0 DRAFT 0.9 IS RATIFIED | HARDCORE GAMES™ 

I posted that some time ago when I saw something on the newswires. AMD is known to be offering PCI Express 4 with its upcoming series of processors.

I am not sure yet what Intel is going to do, as there has not been much leaking out from them yet.

PCI Express 4 will not help graphics cards much; motherboards as a whole will benefit the most.

0 Likes