Not sure how you'd use that many lanes, as even network cards can make do with 4-8 lanes.
SSD arrays may need a few more lanes.
Seems like overkill to me.
Well, datacentres will be replacing spinning rust with SSDs. Say you have 12 NVMe SSDs in a rack module at the standard x4 link apiece: that's 48 lanes for storage alone, which works out to a lot of bandwidth. You will need server boards with several full-speed PCIe slots, and your Intel CPUs with 48 lanes won't come close to being adequate.
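For a rough sense of scale, here's a back-of-the-envelope sketch. It assumes x4 PCIe 3.0 links per NVMe drive (a typical config, not something stated in the thread) and ~0.985 GB/s of usable bandwidth per Gen3 lane after 128b/130b encoding overhead:

```python
# Lane/bandwidth math for the 12-SSD example above.
GEN3_GBPS_PER_LANE = 0.985  # usable GB/s per PCIe 3.0 lane (after encoding)
LANES_PER_SSD = 4           # typical NVMe U.2/M.2 link width
SSD_COUNT = 12

lanes_needed = SSD_COUNT * LANES_PER_SSD
aggregate_gbps = lanes_needed * GEN3_GBPS_PER_LANE

print(f"Lanes for storage alone: {lanes_needed}")         # 48
print(f"Aggregate bandwidth: {aggregate_gbps:.1f} GB/s")  # ~47.3 GB/s
```

So the drives alone would consume the entire 48-lane budget of one of those Intel chips, before you've plugged in a single NIC or GPU.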
My R5 2400G is a tad short on PCIe lanes
Agreed. All those lanes end up being processed serially thanks to medieval CPU design limitations (x86/64), so what's the point?
Have five or so USB 3.x devices transferring data concurrently and you'll see where the problem really lies: the transfer speed gets divided by five!
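A minimal sketch of that division, assuming all five devices hang off a single USB 3.2 Gen 1 (5 Gb/s) host controller; the numbers are illustrative, not measured:

```python
# Five devices sharing one host controller's link split its bandwidth
# roughly evenly (ignoring protocol overhead for simplicity).
CONTROLLER_GBPS = 5.0  # USB 3.2 Gen 1 link speed, shared by all devices
devices = 5

per_device = CONTROLLER_GBPS / devices
print(f"Each of {devices} devices gets ~{per_device:.1f} Gb/s")  # ~1.0 Gb/s
```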
Well, there's the possibility of having 5 PCIe x16 electrical slots (80 lanes), with the other 80 lanes divided out however the motherboard designers wish, be it USB, network interfaces, or drives. That gives AMD a large advantage in workstations equipped with 5 GPUs, when Intel is competing with just 80 lanes total.
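A hypothetical lane budget along those lines; the slot and device split below is made up for illustration, not a real board layout:

```python
# Sketch of a 160-lane workstation budget: 80 lanes for five x16 GPU
# slots, the remaining 80 divided among storage, networking, and I/O.
TOTAL_LANES = 160

budget = {
    "GPU slots (5 x x16 electrical)": 5 * 16,  # 80 lanes
    "NVMe drives (8 x x4)":           8 * 4,   # 32 lanes (assumed split)
    "Dual 100GbE NIC (x16)":          16,      # assumed
    "USB / chipset / misc":           32,      # assumed remainder
}

used = sum(budget.values())
for item, lanes in budget.items():
    print(f"{item:32s} {lanes:3d} lanes")
print(f"{'Total':32s} {used:3d} / {TOTAL_LANES}")
```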
Who knows? With as few details as there are about Rome, the CPUs themselves could use 32 PCIe lanes to communicate with each other. When you have 8-core chiplets and 64 cores per CPU, there's going to need to be a lot of inter-CPU traffic.