I'm trying out SEV on an EPYC 7282. Everything seems great until I try network operations. KVM guests running with SEV show significantly reduced network performance. I then added <driver iommu="on"/> to a guest not running SEV and saw the same problem. I set up a large data transfer from each guest and found that the IOMMU-enabled guest transferred at roughly 10% of the rate of a guest configured without it.
I've tried tinkering with IOMMU-related boot parameters to no avail. I'd simply ignore it and not use IOMMU, except that it is required for virtio devices running with SEV.
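For anyone checking the same thing, the IOMMU options actually in effect can be read back from the running kernel (standard Linux paths; no guest-specific assumptions here):

```shell
# List any IOMMU-related parameters the kernel booted with
tr ' ' '\n' < /proc/cmdline | grep -i iommu || echo "no iommu parameters set"
```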
Any hints would be welcome. Here is the NIC configuration:
<source dev="ens4f0" mode="bridge"/>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
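For context, here is a hypothetical full interface element assembled around the fragments above (the driver line is the one that triggers the slowdown; the rest is standard libvirt domain XML):

```xml
<interface type="direct">
  <source dev="ens4f0" mode="bridge"/>
  <model type="virtio"/>
  <driver iommu="on"/>
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
```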
We can wrap this up. I moved one of my guests to an Intel box and enabled IOMMU there. I'm seeing the same performance issue on Intel. I'm not seeing it on s390, though. So, this must be an Ubuntu/QEMU/x86_64 thing.
Also, have you consulted the Linux® Network Tuning Guide for AMD EPYC™ 7002 Series Processor Based Servers? It's written around bare metal performance, but can still be a starting point.
Yes, I have consulted that document, as well as some other sources. I've tried iommu=pt but saw no change.
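For the record, iommu=pt goes onto the kernel command line; on Ubuntu the usual route is editing /etc/default/grub and rebuilding the config (sketch, assuming the default GRUB setup):

```shell
# /etc/default/grub -- append iommu=pt (passthrough mode) to the existing line:
GRUB_CMDLINE_LINUX_DEFAULT="... iommu=pt"
# then rebuild and reboot:
#   sudo update-grub && sudo reboot
```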
OK. We will have to take a closer look and get back to you. I am not sure we have that NIC available, though, if we need to reproduce the environment.
Some additional info if you're going to reproduce it (many thanks, btw). I used the following command line as a starting point to create the guest domain:
sudo virt-install \
  --name sev-guest \
  --memory 4096 \
  --memtune hard_limit=4563402 \
  --boot uefi \
  --disk /var/lib/libvirt/images/sev-guest.img,device=disk,bus=scsi \
  --disk /var/lib/libvirt/images/sev-guest-cloud-config.iso,device=cdrom \
  --os-type linux \
  --os-variant ubuntu20.04 \
  --import \
  --controller type=scsi,model=virtio-scsi,driver.iommu=on \
  --controller type=virtio-serial,driver.iommu=on \
  --network network=default,model=virtio,driver.iommu=on \
  --memballoon driver.iommu=on \
  --graphics none \
  --launchSecurity sev
The ISO simply changed the default password. I also tried replacing the hard limit with locked backing memory. No change.
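The locked-memory variant I tried was libvirt's memoryBacking element in the domain XML:

```xml
<memoryBacking>
  <locked/>
</memoryBacking>
```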
To observe the transfer speeds, if you're going to try to reproduce this, I installed nginx in the guest and used wget from another system to transfer a collection of 1 GB files.
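A minimal sketch of that test setup (the file here is created in the current directory for illustration; in the real setup it would go under nginx's docroot, /var/www/html on Ubuntu):

```shell
# Create a 1 GiB test file for nginx to serve (sparse, so creation is instant)
truncate -s 1G test-1g.bin
ls -lh test-1g.bin
# From the other system, fetch it and note the rate wget reports:
#   wget -O /dev/null http://<guest-ip>/test-1g.bin
```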
The network adapter is not mentioned in the manual, so it's likely not tested by Lenovo.
I have some dual-port PCIe SFP+ cards in my shop. They have worked in every machine I have tried them in, so I cannot see why the Intel device would be any problem.
I was noticing problems with Hyper-V on my AMD desktop until a new BIOS fixed them, so I am wondering if there is a fault in the Lenovo BIOS, as AMD supplies the core BIOS code rather than the board makers.
As a suggestion, see if Hyper-V runs OK on the machine. That will help narrow down whether the problem is the operating system or the hardware.
Given that you are not using all the memory slots: I have noted that performance is better with all slots filled with RAM when doing demanding workloads.
The manual says the machine has 16 memory slots. You have a few 32GB DIMMs on the board. The machine supports 128GB DIMMs, but those are brutally expensive.
I have a lot of experience with computer chess, which can use all the RAM you can imagine and run banks of servers into the ground. Many chess enthusiasts with 4-CPU boards and a ton of RAM have plenty of comments on what works and what does not.