
Server Processors

henry3058
Journeyman III

What is the maximum number of NVMe drives supported on both 1S & 2S EPYC?

Hi everyone, I am looking for some help with the following. I have called the AMD tech line and spoken with several engineers, but I am unable to find the answer to the following question.

How many NVMe PCIe devices are supported by 1st- and 2nd-gen EPYC, and if you have 2 sockets, does that double the supported drive quantity?

I understand the initial limitation would be 32 drives when running at x4 because of the CPU's PCIe lane limit, but these devices would run through an aggregator, so let's just assume they only use one lane (x1) each.
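As a rough sketch of that lane-only arithmetic (assuming 128 usable PCIe lanes and ignoring any per-port endpoint limits or lanes reserved for a BMC/NIC; the names and constants below are illustrative only):

```python
# Back-of-envelope lane arithmetic for the question above.
# Assumes 128 usable PCIe lanes and ignores per-port endpoint limits
# and lanes reserved for a BMC or NIC; names are illustrative only.

TOTAL_LANES = 128

def lane_limited_drives(lanes_per_drive: int) -> int:
    """Max drives if total PCIe lanes were the only constraint."""
    return TOTAL_LANES // lanes_per_drive

print(lane_limited_drives(4))  # 32 drives at x4
print(lane_limited_drives(1))  # 128 drives at x1, behind an aggregator
```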

I was able to find the following slide listing 32 devices on 2nd-gen EPYC as well, but it doesn't clarify whether this is per socket or what it's using as a reference.
Review: AMD Epyc 7742 2P Rome Server - CPU - HEXUS.net 

Anonymous
Not applicable

Hello henry3058,

Please see this previous posting for a description of what's possible:  https://community.amd.com/thread/223345 

As that post states, each of the x16 ports can be bifurcated all the way down to x1, but there is a maximum of 8 endpoints per port. So you can do eight x1 devices, but no more. This is independent of whether the platform is 1-socket or 2-socket.
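A minimal sketch of that per-port rule, assuming a 16-lane port and the 8-endpoint cap described above (the function and constant names are illustrative, not an AMD API):

```python
# Per-port endpoint count under the bifurcation rule described above:
# an x16 port can be split down to x1, but the root complex is assumed
# to support at most 8 endpoints regardless of the split.

PORT_LANES = 16
MAX_ENDPOINTS_PER_PORT = 8   # cap quoted in the post above

def endpoints_per_port(lane_width: int) -> int:
    """Devices one x16 port can host at a given bifurcation width."""
    return min(PORT_LANES // lane_width, MAX_ENDPOINTS_PER_PORT)

for width in (16, 8, 4, 2, 1):
    print(f"x{width}: {endpoints_per_port(width)} endpoints")
# x1 still yields 8, not 16, because of the endpoint cap
```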

Now to your specific question of how many NVMe devices the processor and platform can support: that is largely up to the motherboard and how the vendor has laid out the PCIe subsystem. Theoretically, with 128 lanes you can have up to 32 NVMe devices connected via x4 (typical for a U.2 NVMe device found in servers), but then you have no spare lanes for a BMC or network device. With a 2-socket platform, the motherboard vendor has the option to provide an additional 32 lanes by cutting the processor interconnect down to only 3 of the 4 Infinity Fabric links. If you find a motherboard that does this, you then have PCIe lanes available to connect the additional infrastructure.
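Putting rough numbers on that, here is a sketch of the lane budget under the assumptions above (128 lanes per socket, four x16 links per socket used for Infinity Fabric on a standard 2-socket board, and one x16 port freed per socket for each dropped link; the names are illustrative):

```python
# Platform-level lane budget, a sketch of the description above.
# Assumptions: each socket exposes eight x16 links (128 lanes); on a
# 2-socket board, four links per socket normally become Infinity
# Fabric, leaving 128 PCIe lanes total; dropping to three Infinity
# Fabric links frees one x16 port per socket (an extra 32 lanes).

LANES_PER_X16_PORT = 16

def usable_lanes(sockets: int, if_links: int = 4) -> int:
    """PCIe lanes available for devices (before BMC/NIC overhead)."""
    if sockets == 1:
        return 8 * LANES_PER_X16_PORT          # all eight ports carry PCIe
    freed_ports = sockets * (4 - if_links)     # ports freed by dropping IF links
    return 8 * LANES_PER_X16_PORT + freed_ports * LANES_PER_X16_PORT

for sockets, if_links in ((1, 4), (2, 4), (2, 3)):
    lanes = usable_lanes(sockets, if_links)
    print(f"{sockets}S, {if_links} IF links: {lanes} lanes -> "
          f"{lanes // 4} NVMe at x4 (no lanes left for BMC/NIC)")
```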

Hope this helps.

henry3058
Journeyman III

Hey mbaker_amd, that information is very helpful and leads to a few more questions.

The information given is for 1st-gen EPYC, which has 4 dies with two x16 PCIe 3.0 links per die, for a total of 128 lanes.

2nd-gen EPYC has 8 compute dies, but the I/O is handled via a central chiplet. Do you know if the maximum of 8 drives per x16 link carries over to the PCIe 4.0 x16 links? If the drive maximum is 32 or 64 and is handled via an I/O chip, does it care how many lanes each device uses, as long as the maximum drive quantity for the system isn't exceeded?

On this 64-drive limit, would this be per CPU? I.e., if I had a 2-socket server, would it be 128 drives?

I understand that a lot comes down to the motherboard and hardware. My question would only apply if everything else was available to make this configuration work. Thanks for the help.

Anonymous
Not applicable

Hi again henry3058,

You are correct that all I/O depends solely on the I/O die. That I/O die has the eight x16 root complexes, each of which has to follow the maximum-endpoint rule (up to 8 endpoints, even if all are bifurcated down to x1). And remember: four of those eight connections per socket turn into Infinity Fabric on a 2-socket platform, so there are still only 128 lanes (with the exception I mentioned previously: some motherboards connect with only three of the links, making an extra two x16 connections, or 32 lanes, available).

So with a maximum of eight endpoints per x16 connection and eight x16 connections in a 1-socket system, you can have at most 64 PCIe endpoints, or PCIe devices. On a dual socket, that number is typically the same (it can grow to 80 endpoints with those additional 32 lanes). Remember, though: these are server processors and will have a BMC connected, taking at least a single PCIe lane (and one endpoint).
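As a sketch of that endpoint arithmetic, assuming the 8-endpoint-per-port cap, eight usable x16 ports on 1-socket or a standard 2-socket platform, ten on a 2-socket board with only three Infinity Fabric links, and one endpoint reserved for a BMC (names are illustrative):

```python
# Endpoint-limited device count, following the reply above.
# Assumptions: at most 8 endpoints per x16 root complex; eight PCIe
# x16 ports on a 1-socket or standard 2-socket platform, ten on a
# 2-socket board running only three Infinity Fabric links; one
# endpoint (and lane) reserved for the BMC.

MAX_ENDPOINTS_PER_PORT = 8

def max_devices(x16_ports: int, reserved_endpoints: int = 1) -> int:
    """Upper bound on PCIe devices, e.g. NVMe drives on x1 links."""
    return x16_ports * MAX_ENDPOINTS_PER_PORT - reserved_endpoints

print(max_devices(8))    # 1S or standard 2S: 64 endpoints, 63 usable after the BMC
print(max_devices(10))   # 2S with 3 IF links: 80 endpoints, 79 usable after the BMC
```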

I hope that clears things up.
