I was testing my NVMe drives separately and simultaneously to see why my NVMe RAID setup wasn't giving me the results I expected.
In this test case, I tried to start them all as close to the same time as possible:
These are 4 Samsung 960 EVOs on an Asus Hyper M.2 x16 card. Each drive posts roughly the same numbers when tested individually, NOT at the same time:
I bought the Threadripper because of the 64 dedicated PCIe lanes to the processor, but it seems like there is a bottleneck making these lanes feel less than dedicated. The 4KiB Q32T1 result is especially suspect: the four simultaneous results seem to sum to roughly what a single drive does alone, despite this being 4 different processes doing single-threaded tests on a 16-core processor.
I also did a 3-way simultaneous test on other M.2 slots on my board besides the Asus Hyper M.2 x16:
Again, it looks like there is only so much I/O capability to go around, and it is being shared among the simultaneous tests.
Here are some of the RAID tests that got me investigating this:
AMD Raid0 7 disks no cache:
AMD Raid0 3 disks no cache:
AMD Raid0 4 disks no cache:
On the RAID tangent: I'm not getting anywhere close to the results shown in material like this, which is what prompted me to dig deeper (though I'm using CrystalDiskMark rather than IOMeter):
The RAID numbers aside, my expectation was that these individual drives should perform just as well simultaneously as separately, considering I have 16 cores and 64 CPU PCIe lanes. Why is my expectation invalid? Is this a bottleneck in the Infinity Fabric? It seems weird that it can push such high sequential numbers, yet the 4K tests are fighting each other for resources.
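One cheap sanity check before blaming the fabric is confirming that each drive actually negotiated its full PCIe link (x4 at Gen3 speed for a 960 EVO). On Linux this can be read straight out of sysfs; the paths below are the standard `/sys/class/nvme` layout, so this sketch won't work on Windows, where a tool like HWiNFO shows the same link info:

```python
import glob
import os

def nvme_link_info():
    """Collect the negotiated and maximum PCIe link speed/width for each
    NVMe controller visible in Linux sysfs. Returns an empty dict on
    systems with no NVMe devices (or no sysfs, e.g. Windows)."""
    info = {}
    for dev in sorted(glob.glob("/sys/class/nvme/nvme*")):
        pci = os.path.join(dev, "device")  # symlink to the PCI device node
        entry = {}
        for attr in ("current_link_speed", "current_link_width",
                     "max_link_speed", "max_link_width"):
            path = os.path.join(pci, attr)
            if os.path.exists(path):
                with open(path) as f:
                    entry[attr] = f.read().strip()
        info[os.path.basename(dev)] = entry
    return info

if __name__ == "__main__":
    for name, attrs in nvme_link_info().items():
        print(name, attrs)
```

If a drive reports a current width below x4 (or a downgraded speed), the bottleneck is at the slot/bifurcation level rather than anything deeper in the platform.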
I think I may have to give up on AMD RAID for now anyway because of other issues (installed the 17.50 NVMe RAID drivers for Threadripper, and now the computer cannot power down normally), but considering how much time I sank into it (it took me several hours just to get RaidXpert online), I wanted to get my results out there.