When running the mxgpu-install.sh -a script on the ESXi host (after first running the configuration step and rebooting), the script reports that no Virtual Functions exist. I have also installed the vCenter Plugin and re-ran the configuration script, specifying that the plugin should be used to create the virtual functions, but that appears to be inoperable as well.
We have a Dell VRTX Blade Chassis with three m520 blades dedicated to VDI. We currently have 18 VDI workstations in the environment, and we would like to add enhanced graphics to a few of them to start. An AMD FirePro S7150x2 was installed in the chassis and assigned to one blade. I would like to be able to assign a vGPU to individual workstations.
- AMD FirePro S7150x2
- Dell VRTX Blade Chassis
- Dell m520 Blade Server
- vCenter: 6.5.0 Build 7132210
- ESXi: 6.5.0 Build 8294253
- Radeon Pro Settings Plugin v1.0.1 Beta installed (1.0.0 Production was not functional)
- Enabled IOMMU (Intel VT-d)
- Enabled SR-IOV
- Enabled Above 4G memory-mapped I/O (MMIO).
- Enabled UEFI Boot.
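For reference, here is roughly how I verified from the ESXi shell that the card is visible at all. The grep patterns are my own guesses at how the S7150x2 is named in the inventory; adjust them to match your output.

```shell
# From the ESXi shell: confirm the S7150x2 is enumerated on the PCI bus.
# On a working SR-IOV setup, extra "Virtual Function" entries should
# appear for each physical GPU once the host driver has loaded.
lspci | grep -i -E 'amd|ati|display'

# Show the detailed PCI inventory for the GPU, including its module
# binding and passthrough state; the GPU must be owned by the VMkernel
# (not marked for VM passthrough) for VFs to be created. The "FirePro"
# search string is an assumption about the reported device name.
esxcli hardware pci list | grep -i -B 4 -A 12 'firepro'
```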
Briefly: after verifying all BIOS settings were correct on the host, I opened a terminal on the ESXi host. Running the mxgpu-install.sh script per the instructions, it prompts me to specify how many vGPUs to assign to each controller. No matter what I select, the script finishes and indicates the settings won't be applied until the next reboot. I reboot the server, run mxgpu-install.sh -a (to assign GPUs to VMs), and receive an error that no virtual functions exist. I then installed the vCenter plugin and ran the script again, specifying that I would use the plugin to create the virtual functions. When I open the plugin I see the two controllers and a slider indicating zero VFs per controller, and the slider cannot be moved. All hardware has been updated to the latest firmware and drivers.
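Since "no virtual functions" usually means the MxGPU host (GPUV) driver never loaded, this is what I would check next from the ESXi shell. The VIB/module name "amdgpuv" is an assumption on my part; verify it against the package that was actually installed.

```shell
# Check that AMD's MxGPU host driver VIB is actually installed.
# ("amdgpuv" is an assumed name -- match it against your installed VIB.)
esxcli software vib list | grep -i amd

# Check whether the host driver module loaded at boot; if it is not
# listed here, no virtual functions will ever be created.
esxcli system module list | grep -i amdgpuv
```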
The exact errors from the script:

- No Virtual Functions detected. Ensure that GPU's that you want to virtualize are not in Passthrough mode.
- Please reboot the system before running "sh mxgpu-install.sh -a"
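One more diagnostic that may help pinpoint why VF creation fails at boot (e.g. BAR/MMIO space problems, which the Above 4G BIOS setting relates to) is to search the VMkernel log. Again, "amdgpuv" is my assumption for the driver name; adjust to match the installed VIB.

```shell
# Search the VMkernel log for SR-IOV / IOMMU / GPU driver messages from
# boot; a failure to enable virtual functions usually leaves an error here.
grep -i -E 'sr-?iov|amdgpuv|iommu' /var/log/vmkernel.log | tail -n 40
```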