Server Gurus Discussions

sinanju
Adept I

Can't run multiple SEV guests using SR-IOV?

To work around an IOMMU performance issue documented elsewhere, I am trying to leverage SR-IOV. It appears to work well, generally. For testing purposes I have 10 non-SEV guests and 10 SEV guests, all using SR-IOV on an Intel Ethernet Controller X710 for 10GbE SFP+ (rev 02). I can run multiple non-SEV guests simultaneously, each using a different VF. I can also run a single SEV guest and multiple non-SEV guests simultaneously, again on different VFs.

However, if I have an SEV guest running and I attempt to bring up another SEV guest that also uses SR-IOV, the first loses network connectivity and the second never gets it. I have verified that they use different VFs, have unique IP addresses, and report unique MACs.
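For anyone repeating this verification, one quick way to spot a MAC accidentally shared between VFs is to filter the host's `ip link show <pf>` output. This is a sketch, not from the thread: the here-doc stands in for real output, and the field positions assume recent iproute2 formatting (`vf N  link/ether MAC ...`).

```shell
# Flag any MAC reported for more than one VF. Real usage would be:
#   ip link show enp65s0f0 | awk '/vf [0-9]+/ {print $4}' | sort | uniq -d
awk '/vf [0-9]+/ {print $4}' <<'EOF' | sort | uniq -d
    vf 0     link/ether 6a:01:e9:50:8e:44 brd ff:ff:ff:ff:ff:ff, spoof checking off
    vf 1     link/ether 6a:01:e9:50:8e:44 brd ff:ff:ff:ff:ff:ff, spoof checking off
    vf 2     link/ether 52:00:d6:95:50:6a brd ff:ff:ff:ff:ff:ff, spoof checking off
EOF
# prints the duplicated MAC: 6a:01:e9:50:8e:44
```

An empty result means every VF reports a unique MAC.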

I can't find anything that says this is a known limitation.

A fuller description of my server can be found in this thread: https://community.amd.com/message/2988472

mbaker_amd
Staff

Re: Can't run multiple SEV guests using SR-IOV?

Hello sinanju,

We have been busy trying to replicate your scenario. We used Ubuntu 20.04 and a Mellanox ConnectX-4 controller. We were able to successfully create two VMs with SEV enabled and to run both pings and netperf traffic. Below are the logs of how we got this to work (note: the NIC used here does not generate random MAC addresses per VF, so we had to configure them manually).

I enabled SR-IOV on the Mellanox NIC and increased the number of VFs on it to 8.
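For reference, a minimal sketch of that step, assuming the standard sysfs SR-IOV interface and the PF name that appears in the host logs in this thread:

```shell
PF=enp65s0f0   # PF interface name from this thread; substitute your own

# How many VFs the NIC supports at most
cat /sys/class/net/$PF/device/sriov_totalvfs

# Create 8 VFs (reset to 0 first if a different count is already active)
echo 0 | sudo tee /sys/class/net/$PF/device/sriov_numvfs
echo 8 | sudo tee /sys/class/net/$PF/device/sriov_numvfs
```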

 

Here is a list of the Mellanox NIC PF and VF interfaces on the host:

 

# ip a

….

3: enp65s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ec:0d:9a:34:10:12 brd ff:ff:ff:ff:ff:ff
    inet 10.236.13.44/23 brd 10.236.13.255 scope global dynamic enp65s0f0
       valid_lft 83028sec preferred_lft 83028sec
    inet6 fe80::ee0d:9aff:fe34:1012/64 scope link 
       valid_lft forever preferred_lft forever
4: enp65s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ec:0d:9a:34:10:13 brd ff:ff:ff:ff:ff:ff
    inet 10.236.107.89/22 brd 10.236.107.255 scope global dynamic enp65s0f1
       valid_lft 81310sec preferred_lft 81310sec
    inet6 fe80::ee0d:9aff:fe34:1013/64 scope link 
       valid_lft forever preferred_lft forever
7: enp65s0f0v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 6a:01:e9:50:8e:44 brd ff:ff:ff:ff:ff:ff
    inet 10.236.106.194/22 brd 10.236.107.255 scope global dynamic enp65s0f0v2
       valid_lft 83207sec preferred_lft 83207sec
    inet6 fe80::6801:e9ff:fe50:8e44/64 scope link 
       valid_lft forever preferred_lft forever
8: enp65s0f0v3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:00:d6:95:50:6a brd ff:ff:ff:ff:ff:ff
    inet 10.236.106.82/22 brd 10.236.107.255 scope global dynamic enp65s0f0v3
       valid_lft 83242sec preferred_lft 83242sec
    inet6 fe80::5000:d6ff:fe95:506a/64 scope link 
       valid_lft forever preferred_lft forever
9: enp65s0f0v4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e2:4a:5d:96:58:2d brd ff:ff:ff:ff:ff:ff
    inet 10.236.105.95/22 brd 10.236.107.255 scope global dynamic enp65s0f0v4
       valid_lft 83217sec preferred_lft 83217sec
    inet6 fe80::e04a:5dff:fe96:582d/64 scope link 
       valid_lft forever preferred_lft forever
13: enp65s0f0v1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 62:cd:a4:4c:96:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::60cd:a4ff:fe4c:964b/64 scope link 
       valid_lft forever preferred_lft forever
 
amd@amd:~$ lspci | grep -i ether
21:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
41:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
41:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
41:00.2 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
41:00.3 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
41:00.4 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
41:00.5 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
41:00.6 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
41:00.7 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
41:01.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev ff)
41:01.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev ff)

 

Next, I launched the 1st SEV guest and passed through a VF (PCI device 41:00.2) on port 0 of the Mellanox NIC:

 

# launch-qemu.sh -hda ubuntu-18.04.qcow2 -sev -passthru 41:00.2
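The launch script is assumed to hand the VF to QEMU via VFIO. If your tooling does not bind the device itself, the usual manual binding looks roughly like this (a sketch using the standard sysfs interfaces; the BDF is the VF from this thread):

```shell
BDF=0000:41:00.2   # VF being passed through in this thread

# Detach the VF from the host mlx5_core driver
echo "$BDF" | sudo tee /sys/bus/pci/devices/$BDF/driver/unbind

# Steer it to vfio-pci and trigger a reprobe
echo vfio-pci | sudo tee /sys/bus/pci/devices/$BDF/driver_override
echo "$BDF"   | sudo tee /sys/bus/pci/drivers_probe
```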

 

While launching the SEV guest, the following kernel logs from the NIC driver are seen:

 

VM#0 :

[    1.253643] mlx5_core 0000:00:06.0: firmware version: 14.28.1300

[    1.550435] mlx5_core 0000:00:06.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)

[    1.552553] mlx5_core 0000:00:06.0: Assigned random MAC address ea:e9:85:16:b2:55

[    1.719045] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0

[    1.719370] mlx5_core 0000:00:06.0 ens6np0: renamed from eth0

[   19.606686] mlx5_core 0000:00:06.0 ens6np0: Link up

amd@ubuntu:~$

 

amd@ubuntu:~$ dmesg | grep -i SEV
[    0.198894] AMD Memory Encryption Features active: SEV

 

amd@ubuntu:~$ lspci
00:06.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]

 

# ip a

3: ens6np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b2:b6:56:32:96:bd brd ff:ff:ff:ff:ff:ff
    inet 10.236.104.245/22 brd 10.236.107.255 scope global ens6np0
       valid_lft forever preferred_lft forever
    inet6 fe80::b0b6:56ff:fe32:96bd/64 scope link 
       valid_lft forever preferred_lft forever
 
Though the kernel logs for the NIC driver show the assigned random MAC address, I don't know
why the interface information here shows a different one. In any case, the link is up and
I am able to do basic network traffic I/O:

 

amd@ubuntu:~$ ping 10.236.13.216
PING 10.236.13.216 (10.236.13.216) 56(84) bytes of data.
64 bytes from 10.236.13.216: icmp_seq=1 ttl=255 time=0.430 ms
64 bytes from 10.236.13.216: icmp_seq=2 ttl=255 time=0.331 ms

 

Next, I launched the 2nd SEV guest and passed through another VF (PCI device 41:00.7) on port 0 of the Mellanox NIC:

 

# launch-qemu.sh -hda ubuntu1-18.04.qcow2 -sev -passthru 41:00.7

 

While launching the SEV guest, the following kernel logs from the NIC driver are seen:

 

VM#1 :

[    1.234266] mlx5_core 0000:00:06.0: firmware version: 14.28.1300
[    1.532295] mlx5_core 0000:00:06.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[    1.534397] mlx5_core 0000:00:06.0: Assigned random MAC address c2:9d:86:74:88:b6
[    1.695438] mlx5_core 0000:00:06.0 ens6np0: renamed from eth0
[    1.698664] mlx5_ib: Mellanox Connect-IB Infiniband driver v5.0-0
[  114.999235] mlx5_core 0000:00:06.0 ens6np0: Link up
amd@ubuntu:~$ 

 

amd@ubuntu:~$ dmesg | grep -i sev
[    0.197561] AMD Memory Encryption Features active: SEV
 
amd@ubuntu:~$ lspci
00:06.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]

 

# ip a

3: ens6np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b2:b6:56:32:96:bd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b0b6:56ff:fe32:96bd/64 scope link dadfailed tentative 
       valid_lft forever preferred_lft forever
 
Again, the kernel logs (above) for the NIC driver show the assigned random MAC address, yet the
interface information shows a different one, and it is the same MAC as in VM#0.
 
This duplicate predictably causes a warning message in the kernel logs:
[   41.572626] IPv6: ens6np0: IPv6 duplicate address fe80::b0b6:56ff:fe32:96bd used by b2:b6:56:32:96:bd detected!
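The collision on fe80::b0b6:56ff:fe32:96bd follows directly from the duplicate MAC: a link-local address is the EUI-64 expansion of the MAC (flip the universal/local bit of the first octet and insert ff:fe in the middle). A sketch of the derivation, which reproduces the address above for this particular MAC (general IPv6 zero-compression is not handled):

```shell
mac=b2:b6:56:32:96:bd   # MAC both guests ended up with

# Split the MAC into its six octets
set -- $(printf '%s\n' "$mac" | tr ':' ' ')

# Flip the universal/local bit (0x02) of the first octet
first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))

# Assemble the EUI-64 link-local address
echo "fe80::${first}$2:${3}ff:fe$4:$5$6"
# prints fe80::b0b6:56ff:fe32:96bd
```

Identical MACs therefore guarantee identical link-local addresses, which is exactly what duplicate address detection caught here.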
 
Also, this interface does not get an IPv4 address assigned.
 
Even so, the link is up for this VF on VM#1 and I am able to do basic network traffic I/O:

 

amd@ubuntu:~$ ping 10.236.13.216
PING 10.236.13.216 (10.236.13.216) 56(84) bytes of data.
64 bytes from 10.236.13.216: icmp_seq=1 ttl=255 time=0.343 ms
64 bytes from 10.236.13.216: icmp_seq=2 ttl=255 time=0.367 ms

 

As the Mellanox NIC being used here is a ConnectX-4, the duplicate MAC address issue matches the explanation above (the NIC does not generate random per-VF MAC addresses), so now I am configuring a static guest MAC address as follows:

 

amd@amd:~$ sudo ip link set dev enp65s0f0 vf 5 mac 00:02:c9:f1:72:ee
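Extending that to every VF, one could assign a deterministic, locally administered MAC per VF in a loop. This is a sketch, not from the thread: the prefix and numbering scheme are arbitrary choices, and the real `ip link set` call is left commented so the snippet runs anywhere.

```shell
PF=enp65s0f0   # PF name from this thread

for vf in 0 1 2 3 4 5; do
  # One unique MAC per VF index, e.g. vf 5 -> 00:02:c9:f1:72:e5
  mac=$(printf '00:02:c9:f1:72:%02x' $(( 0xe0 + vf )))
  echo "vf $vf -> $mac"
  # sudo ip link set dev "$PF" vf "$vf" mac "$mac"
done
```

With every VF carrying a distinct MAC before guest launch, each SEV guest derives a distinct link-local address and DAD no longer fails.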

 

Relaunching the 2nd VM:

launch-qemu.sh -hda ubuntu1-18.04.qcow2 -sev -passthru 41:00.7

 

# ip a

….

3: ens6np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:02:c9:f1:72:ee brd ff:ff:ff:ff:ff:ff
    inet 10.236.104.152/22 brd 10.236.107.255 scope global ens6np0
       valid_lft forever preferred_lft forever
    inet6 fe80::202:c9ff:fef1:72ee/64 scope link
       valid_lft forever preferred_lft forever

sinanju
Adept I

Re: Can't run multiple SEV guests using SR-IOV?

I've given up on it. I found docs saying direct PCI passthrough wasn't supported and assumed the delay in response on your end was due to that.

I am able to get things working to the point that basic pings are fine. Once the network connections are put under stress, things go sneakers-up.

It's a shame. I was hoping it would help get around the IOMMU speed limit I can't shake.
