Processors

martareis
Adept I

Ryzen 9 5900X Configuration

Hi, I need your help confirming system compatibility. The workstation will be set up for video editing.

I have doubts regarding memory compatibility and PCIe lanes.

Configuration:

AMD Ryzen 9 5900X 3.7 GHz

ASUS PROART X570-CREATOR WIFI

Corsair Force Series Gen4 MP600 NVMe M.2 2TB SSD (boot disk)

Samsung 1TB 860 EVO (cache files disk)

G.Skill Ripjaws V Red DDR4 2133 PC4-17000 4x16GB - Upgrade in the future to G.Skill Trident Z Neo DDR4 3600 PC4-28800 64GB 4x16GB CL16 or higher

LSI Megaraid 9261-8i +  6x 6TB RED NAS WD60EFRX - Raid 5 or 10 (Media Files Disk)

2x GTX 1080 Founders  (I'll try to upgrade gpu in the future to get the maximum performance out of the processor)

I'm purchasing new components (CPU, motherboard, NVMe disk) to upgrade my workstation, but I'd like to keep the memory and some other parts (SATA SSD, RAID components, and GPUs) for now to cut costs.

 

I appreciate your help.

 

Regards,

Marta

7 Replies

You can check which RAM part numbers are compatible with your ASUS motherboard here: https://www.asus.com/Motherboards-Components/Motherboards/ProArt/ProArt-X570-CREATOR-WIFI/HelpDesk_Q...

You can also check other Hardware devices such as SSD/HDDs for your motherboard at the same link.

The good news is that you don't need to upgrade the BIOS: the Ryzen 9 5900X should work out of the box with the BIOS version installed on your motherboard.

Here are the Specs for your Asus Motherboard: https://www.asus.com/Motherboards-Components/Motherboards/ProArt/ProArt-X570-CREATOR-WIFI/techspec/

(Screenshot of the motherboard spec page attached: Screenshot 2021-09-16 103642.png)


I saw all of that before posting the thread. It's not that simple to understand.

Memory running at 2133 MHz may not be compatible, and that's what I own.

The diagram for PCIe bandwidth and bifurcation left me with doubts.

 

Configuration options:

Dual GPU at PCIe 3.0 x16/x16 seems impossible.

An alternative could be a single GPU at PCIe 3.0 x16, or dual GPU at PCIe 3.0 x8/x8; 1x NVMe at PCIe 4.0 x4 (I'll use only one M.2 slot); 1x PCIe 2.0 x8 RAID controller (but I guess that's not achievable either).

Or: 1x GPU at PCIe 4.0 x16; 1x NVMe in M.2_1 at PCIe 4.0 x4; 1x PCIe 2.0 x8 RAID controller (also seems impossible).

 

Thx

Yeah, PCIe lane distribution can be very confusing. PCIe 4.0 slots are backwards compatible; they can drop down to 3.0 or 2.0 as necessary.

So, it sounds like your setup should be somewhat workable:

Your 1080s are PCIe 3.0, not 4.0. The mobo has two x16 PCIe 4.0 slots off the CPU; however, Ryzen 3000 and 5000 non-G CPUs only have a maximum of 24 PCIe 4.0 lanes total (16 for graphics, 4 for NVMe, and 4 to the X570 chipset). So while you cannot put two PCIe 4.0 video cards into the first two PCIe slots and have them both work at x16 (they will drop to x8), your video cards are only PCIe 3.0, which has half the bandwidth of 4.0. In theory, 32 lanes of PCIe 3.0 should be the same bandwidth as 16 lanes of 4.0.
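If you want to sanity-check that bandwidth claim, the per-lane math works out like this (a rough sketch; real-world throughput is a bit lower due to protocol overhead):

```python
# Approximate usable per-lane throughput in GB/s, one direction.
# PCIe 2.0 uses 8b/10b encoding; 3.0 and 4.0 use 128b/130b.
PER_LANE_GBPS = {
    "2.0": 5.0 * (8 / 10) / 8,      # 5 GT/s  -> 0.5 GB/s
    "3.0": 8.0 * (128 / 130) / 8,   # 8 GT/s  -> ~0.985 GB/s
    "4.0": 16.0 * (128 / 130) / 8,  # 16 GT/s -> ~1.969 GB/s
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Usable one-direction bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(f"3.0 x16: {link_bandwidth('3.0', 16):.1f} GB/s")  # ~15.8
print(f"4.0 x8:  {link_bandwidth('4.0', 8):.1f} GB/s")   # ~15.8 -- same bandwidth
print(f"3.0 x8:  {link_bandwidth('3.0', 8):.1f} GB/s")   # ~7.9
```

Note the equivalence only holds on paper: a 3.0 card in a 4.0 slot still trains at 3.0 speeds on the physical lanes it actually gets, so a 3.0 card dropped to x8 really does have half the bandwidth.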

So, best case for your video cards, you can run both at full 3.0 x16 speeds, assuming the mobo allows this. Worst case is that they will drop down to x8 each.

Your NVMe drive goes into the first NVMe slot - this has a direct connection to CPU so doesn't share bandwidth with any other device.

You can put your RAID card into the third PCIe slot, but it looks like that slot is limited to x4 electrically.  So your RAID card will work, but only run at x4 (the slot will be missing electrical contacts that allow for higher speeds).

If running the RAID card at x4 is too slow for you, you'll probably need to buy a different RAID card that can support PCIe 3.0 or 4.0, or ditch Ryzen altogether and go for a Threadripper config.
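Before buying a new card, it may be worth estimating whether x4 would actually bottleneck a spinning-disk array. A rough sketch (the 175 MB/s per-drive figure is my assumption for a WD Red 6TB sequential read, not a measured number):

```python
PCIE2_PER_LANE_GBPS = 0.5   # PCIe 2.0: 5 GT/s with 8b/10b encoding -> 0.5 GB/s/lane
HDD_SEQ_MBPS = 175          # assumed sequential speed of one WD60EFRX (not measured)
DRIVES = 6

link_gbps = PCIE2_PER_LANE_GBPS * 4              # 2.0 card limited to an x4 link
raid5_gbps = HDD_SEQ_MBPS * (DRIVES - 1) / 1000  # RAID 5: n-1 drives carry data

print(f"PCIe 2.0 x4 link: {link_gbps:.1f} GB/s")    # 2.0 GB/s
print(f"6-drive RAID 5:   {raid5_gbps:.3f} GB/s")   # 0.875 GB/s
print("HDDs, not the slot, are the bottleneck" if raid5_gbps < link_gbps
      else "the x4 link is the bottleneck")
```

So with mechanical drives, x4 at PCIe 2.0 should still leave headroom; the x4 limit would mostly matter if you later put SSDs behind the controller.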

 

My old Xeon E5-1650 v3 supports 40 lanes.

The Ryzen 9 5000 series cuts that down to only 24 lanes.


That's because Ryzens are desktop CPUs, while (midrange and up) Xeons are workstation- to server-class CPUs.

If you need more than 24 PCIe lanes, then you'll need to go to Threadripper.

And if you're trying to spend as little money as possible, you will be making a lot of compromises.

Gwillakers
Challenger

I'm not going to get into the PCIe-slot compatibility question; I believe others have addressed that. My initial reaction is that you won't have a problem, but know that I did not look at it in depth.

Let me address, though, your choice of processor, NVMe and memory.

You are choosing the 5900X. For about $250 more, I believe you may be better served by the 5950X. It's not for the additional 4 cores and 8 threads, but because I believe the silicon is generally better in the 5950X. All dies are meant to have 8 functioning cores; the dies where not all cores made the grade are relegated to the 5900X pool. The 5900X has two dies, each with 2 cores disabled. Before making my purchase, I scoured the Internet for users' experiences with their chips. Generally I found that 5950X users had cooler chips running at lower voltage.

You may also note that in forums like this, people are having more trouble with the 5900X and WHEA errors. I believe most of those errors, though, are more a fault of users' unrealistic overclocking expectations than of the 5900X itself.

If funds are the issue, I would suggest downsizing your OS NVMe drive. Downsizing from 2 TB to 1 TB will save you half the expense, and selecting Gen 3 over Gen 4 will save half again. You seem to have plenty of storage space; do you really need it all on your OS drive? Also, it is easy to get speeds of 2500 MB/s on Gen 3, so what price are you paying to transfer 5 GB in one second instead of two? Think of your NVMe drive as a temporary option, one that will save you $300 and let you get a better processor. If there is one thing that falls faster than the price of processors, it is the price of storage.
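The "one second instead of two" point, spelled out (2500 and 5000 MB/s are round numbers I'm assuming for typical Gen 3 vs Gen 4 sequential reads):

```python
def transfer_seconds(size_gb: float, speed_mb_per_s: float) -> float:
    """Time to move size_gb gigabytes at a given MB/s (using 1 GB = 1000 MB)."""
    return size_gb * 1000 / speed_mb_per_s

for speed in (2500, 5000):
    print(f"{speed} MB/s moves 5 GB in {transfer_seconds(5, speed):.0f} s")
# 2500 MB/s (Gen 3) -> 2 s; 5000 MB/s (Gen 4) -> 1 s
```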

Lastly, I cannot let an opportunity like this go by without making a pitch for error-correcting memory. ECC memory has been with us for well over 50 years. It doesn't cost that much more, and it is not much slower than non-ECC; in real life you would not notice the difference in speed. You are lucky: you haven't invested in that cheap stuff yet, yet you will have all the supporting cast. The Ryzen processor supports ECC, the X570 chipset supports ECC, and even your motherboard supports ECC. Don't you deserve to have your processor check each 72-bit word fetched from the memory subsystem for integrity? Just look at the people in these forums spending hours, days, weeks and even months searching for some solution to data corruption. ECC makes all that so much easier. It warns you of errors you would never have noticed (check the event log). It corrects 1-bit errors. Unknown to most, ECC memory can be overclocked, and the best thing about overclocking ECC is that it will inform you (in the event log) when you have pushed it too far. People who use non-ECC can only guess: maybe they pushed it too far and will experience a crash later, or maybe they could have pushed it further and are leaving performance on the table. They will never know. All they know is that the bytes grabbed during those few hours of MemTest64 were OK, for now. Overclocking non-ECC, to me, seems as sensible as removing the brakes from a race car expecting the reduced weight to make it go faster.
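For the curious, that 72-bit width isn't arbitrary: it's 64 data bits plus 8 check bits, which is exactly what a SECDED (single-error-correct, double-error-detect) extended Hamming code requires. A quick sketch of where the 8 comes from:

```python
def secded_check_bits(data_bits: int) -> int:
    """Check bits needed for SECDED (extended Hamming) over data_bits."""
    r = 0
    while (1 << r) < data_bits + r + 1:  # Hamming bound for single-error correction
        r += 1
    return r + 1  # +1 overall parity bit upgrades SEC to SECDED

print(secded_check_bits(64))  # 8 -> 64 data + 8 check = the 72-bit DIMM word
```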

I get annoyed when I see third-party DIMM vendors, such as Thri-dent or X-Skill, put dies on their boards spec'd at 2133 MHz @ 1.2 V, and then expect the user to pump them up to 4000 MHz at 1.4 V!

Go ahead, for giggles... bring up CPU-Z, open the SPD tab, and read out the highest JEDEC specification for your sticks. Feel taken?

My Kingston KSM32ED8/16ME and KSM26ED8/16ME have always run at least two bins higher than they are rated for, with NO additional voltage applied. Also, the JEDEC spec of their dies has always matched what they are rated for externally on the package.

A lot of people do not realize that the Infinity Fabric clock is tied to the DIMM speed (it can be uncoupled, though). The maximum officially supported speed for the internal memory controller is DDR4-3200, so running your memory at its advertised speed above that effectively overclocks your processor and voids your warranty. Some people would have you run the memory at 3200 MHz and then tighten the timings (e.g., run 3200 MHz at CL16 instead of CL22). You could do that, and it would reduce the time the memory subsystem takes. But you know what? It is still an overclock. The timings and voltage are part of the JEDEC specification as much as the speed is.
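To make the coupling concrete, here is the clock arithmetic, assuming the usual 1:1 coupled mode where FCLK equals MEMCLK (DDR means two transfers per memory clock):

```python
def coupled_clocks(ddr4_rating: int) -> tuple[int, int]:
    """(MEMCLK, FCLK) in MHz for a DDR4 speed rating in 1:1 coupled mode."""
    memclk = ddr4_rating // 2  # e.g. DDR4-3600 does 3600 MT/s at an 1800 MHz clock
    return memclk, memclk      # coupled mode: Infinity Fabric runs at the same clock

for rating in (2133, 3200, 3600):
    memclk, fclk = coupled_clocks(rating)
    print(f"DDR4-{rating}: MEMCLK {memclk} MHz, FCLK {fclk} MHz")
```

Past roughly DDR4-3600, FCLK usually can't keep up and the platform falls back to 2:1 mode, which costs latency; that's one more reason the officially supported DDR4-3200 point is a safe target.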

Some things never change. Remember the game "Hide and Go Seek"? Some kids would try to cheat and count really fast: one, two, three, ... ten! Others would try to cheat and count fewer numbers: one, two, three, four, five, six! Either way, they didn't give their counterpart enough time to hide. No cheating in life.

Well, I'm not interested in overclocking... at all.

I appreciate your thoughts and recommendations for the system build.

Transfer speed is important if you consider that I'll be editing 6K video.

Downgrading the system disk to PCIe 3.0 to get a better, more reliable CPU seems fair. More cores are welcome, but what I need is stability and reliability.

I was trying to keep my current memory (Ripjaws 2133 MHz) to cut costs, but you're right about the advantages of ECC memory. Let's see if I can stretch to it.





