colinh
Adept I

Resolving Driver Conflicts in Xorg -> Running 2 Discrete Video Cards, Primary Nvidia Secondary AMD

I've been trying to get this setup to work for almost a week now. All I want to do is use my Nvidia GTX 670 as my primary video card for display (with occasional CUDA computing) and use my AMD R9 280X in the secondary PCIe slot primarily for OpenCL. I have managed to set this up with only minor issues in Windows 7, where there aren't any major conflicts between the Nvidia and AMD Catalyst drivers (the only setback is that a dummy plug or monitor must be connected to the AMD card, otherwise its sensors can't be read from any program). I would really like to get this working under Linux though, as that is my main OS and where I do all my development; Windows is rather useless to me other than for gaming.

I have tried every configuration option I could think of to get this running, but the official AMD driver (which is needed for OpenCL to work) seems very inflexible and incompatible with any other vendor's graphics driver loaded into Xorg. The main incompatibilities lie in the libGL and libglx modules: while the Nvidia driver has no problem booting with a libGL from ANY vendor and either its own libglx.so or the xserver-xorg-core libglx.so (but not the AMD libglx.so), the AMD fglrx driver will not boot with any libGL or libglx other than its own (it doesn't produce any errors in the Xorg log with a different libglx.so, it just hangs the X server on boot and requires a hard reboot).

The closest I have come to solving this is booting an X server with only the primary Nvidia card, using the AMD libGL and the Nvidia libglx in xorg.conf, then starting a second X server session with the fglrx driver and the AMD libGL and libglx modules on another TTY, with output to a second monitor. I then run "export DISPLAY=:1" and "export COMPUTE=:1" (both must be set for programs to fully recognize the AMD card) in a terminal window on that session (2nd monitor), and I can then use the AMD card fully, including for OpenCL. The only problem with this setup is that when I switch back to my primary TTY's X session, I get a GPU idle warning after 60 seconds and the OpenCL application running on the AMD card shuts down until I switch back to that session.
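For reference, here is roughly what I run in a terminal on the second session before starting anything (the application name is just a placeholder):

export DISPLAY=:1    # point X clients at the second server
export COMPUTE=:1    # both must be set for programs to fully recognize the AMD card
./my_opencl_app      # placeholder for whichever OpenCL program you run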

Is there any way to run both of these cards together in the same Xorg session? The only way I could think of is to somehow load two GLX modules on Xorg startup to clear up the conflict, or to get the fglrx driver to accept a third-party GLX like the Nvidia driver does, but I don't know how I would go about doing either. Why is it that the fglrx driver will not cooperate with any libglx.so or libGL but its own? If the official Nvidia driver is able to work with third-party modules, I don't understand why it would be so difficult for the AMD driver to do the same.

Here are the last lines reported in my Xorg logs for the different third-party GLX module configurations (Nvidia, Xorg Foundation). As you can see, the fglrx driver hangs the X server with every non-AMD GLX module, requiring a hard reset, without reporting any errors:

Nvidia GLX:

[   844.888] (II) fglrx(0): Kernel Module Version Information:

[   844.888] (II) fglrx(0):     Name: fglrx

[   844.888] (II) fglrx(0):     Version: 13.25.5

[   844.888] (II) fglrx(0):     Date: Dec  6 2013

[   844.888] (II) fglrx(0):     Desc: AMD FireGL DRM kernel module

[   844.888] (II) fglrx(0): Kernel Module version matches driver.

[   844.888] (II) fglrx(0): Kernel Module Build Time Information:

[   844.888] (II) fglrx(0):     Build-Kernel UTS_RELEASE:        3.11.0-15-generic

[   844.888] (II) fglrx(0):     Build-Kernel MODVERSIONS:        yes

[   844.888] (II) fglrx(0):     Build-Kernel __SMP__:            yes

[   844.888] (II) fglrx(0):     Build-Kernel PAGE_SIZE:          0x1000

[   844.888] (II) fglrx(0): [uki] register handle = 0x00004000

Xorg GLX:

[ 20571.689] (II) fglrx(0): Kernel Module Version Information:

[ 20571.689] (II) fglrx(0):     Name: fglrx

[ 20571.689] (II) fglrx(0):     Version: 13.25.5

[ 20571.689] (II) fglrx(0):     Date: Dec  6 2013

[ 20571.689] (II) fglrx(0):     Desc: AMD FireGL DRM kernel module

[ 20571.689] (II) fglrx(0): Kernel Module version matches driver.

[ 20571.689] (II) fglrx(0): Kernel Module Build Time Information:

[ 20571.689] (II) fglrx(0):     Build-Kernel UTS_RELEASE:        3.11.0-15-generic

[ 20571.689] (II) fglrx(0):     Build-Kernel MODVERSIONS:        yes

[ 20571.689] (II) fglrx(0):     Build-Kernel __SMP__:            yes

[ 20571.689] (II) fglrx(0):     Build-Kernel PAGE_SIZE:          0x1000

[ 20571.689] (II) fglrx(0): [uki] register handle = 0x00ade000

[ 20571.689] (II) fglrx(0): DRI initialization successfull

[ 20571.689] (II) fglrx(0): FBADPhys: 0xf400000000 FBMappedSize: 0x010e0000

Any insights on how to solve this problem?

5 Replies
nou
Exemplar

You should switch it around, that is, use the 280X as the display card and leave the GeForce only for compute purposes. fglrx needs a running X server to operate properly; the Nvidia driver can use the GPU for compute without an X server.


I know having the AMD card in the primary slot will work (Nvidia even has a special xorg.conf option to load the driver without a monitor attached); however, this would defeat the whole purpose of buying this card, as I got it specifically for this task because it provides more than 3x the compute performance of the similarly priced Nvidia card. I don't see why there can't be a workable solution for this reverse card order, and that is what I am looking for.

colinh
Adept I

OK, I ended up solving this problem on my own. This was by far the worst configuration nightmare I've ever endured: I spent dozens of hours researching solutions (I came across many people in the same situation and none who succeeded) and configuring and reconfiguring every aspect of the X Window System and drivers, but I have finally found a workable solution.

The gist of the solution:

- Separate the two video cards into their own xorg.conf files, each with its own working ServerLayout/Device/Monitor/Screen sections.
- Configure lightdm.conf to work in multiseat mode.
- Install Synergy to act as a virtual KVM, so one mouse/keyboard can be used with both X sessions without being active on both screens at the same time.
- Set up shell scripts to run "xhost + LOCAL:", export COMPUTE=:1 on the second monitor, and start the Synergy server and client instances.
- Install xserver-xorg-input-void and use it as the keyboard/mouse driver in the amdxorg.conf file, with options set to stop Xorg from auto-loading extra input devices (which would duplicate the cursor/keyboard on the second monitor) and let Synergy handle input instead.

The end result is that each video card now runs its own X session on its own monitor, with the Nvidia card in the primary slot driving my primary monitor. ALL I have to do to work on the AMD card is move my mouse cursor off the edge of my primary display onto the other monitor, like an extended desktop, and I can start my OpenCL apps on the AMD card. I will post all my configuration files later in this thread for anyone who needs to solve this problem in the future.


>I will post all my configuration files later in this thread for anyone who needs to solve this problem in the future.

yes please

colinh
Adept I

Here is a more detailed guide on how to achieve this setup with the contents of all my configuration files.

In my system I have the Nvidia GTX 670 in the primary PCIe slot and the AMD R9 280X in the secondary PCIe slot; I also have an integrated Intel GPU on the motherboard (P8 Z77-V Pro), which is currently disabled in the BIOS (though I identify it in my primary-seat xorg.conf, which lets me switch from the Nvidia card to the integrated GPU if I need to). The distribution I am using is Kubuntu 13.10, but this should work with any modern distro.

I started by installing the latest proprietary 331 Nvidia driver. Let Nvidia auto-configure the xorg.conf file for the video card, then back that file up somewhere safe; also back up the /usr/lib/libGL.so installed by the Nvidia driver in case you want to use it later. Next, download and install the latest proprietary fglrx AMD driver. This will replace the Nvidia /usr/lib/libGL.so with a symlink pointing to the fglrx libGL.so, and /usr/lib/xorg/modules/extensions/libglx.so with a symlink pointing to the fglrx version of libglx.so. The Nvidia libglx.so is renamed by the AMD installer and should still be in the /usr/lib/xorg/modules/extensions/ directory under some variation of renamed.libglx.so. The file that the new libglx.so symlink points to should have been installed in the /usr/lib/xorg/modules/extensions/fglrx subdirectory under the name fglrx-libglx.so; copy this file to a new module directory (I put mine in /usr/lib/xorg/ati/modules) and rename it to libglx.so. Then edit the libglx.so symlink in /usr/lib/xorg/modules/extensions to point to the renamed Nvidia libglx.so in the same directory (see the sketch after the lspci output below). Next, run the command "lspci | grep VGA" to get the BusIDs of your video cards; for me this gives the following:

01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 670] (rev a1)

02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/R9 280X]

So, looking at the first column, the GTX 670 is at 01:00.0 and the AMD card is at 02:00.0. This info is needed for the xorg.conf files.
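To make the file shuffle from the earlier paragraph concrete, here is a rough sketch of the commands involved (the exact name the AMD installer gives the renamed Nvidia libglx.so varies, so check your extensions directory and substitute it):

sudo mkdir -p /usr/lib/xorg/ati/modules
sudo cp /usr/lib/xorg/modules/extensions/fglrx/fglrx-libglx.so /usr/lib/xorg/ati/modules/libglx.so
# repoint the generic symlink back at the Nvidia copy that the AMD installer renamed
sudo ln -sf /usr/lib/xorg/modules/extensions/<renamed-nvidia-libglx>.so /usr/lib/xorg/modules/extensions/libglx.so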

Rename the current /etc/X11/xorg.conf created by the AMD installer to amdxorg.conf, and copy your backed-up Nvidia xorg.conf back into /etc/X11. We have to use separate xorg.conf files for each video card because the AMD fglrx driver does not cooperate with any other video hardware driver and, in my experience, will silently crash Xorg on boot (requiring a hard reset) if you try to combine them. Next, run "sudo apt-get install xserver-xorg-input-void" to install a dummy driver for your input devices; this is important if you want to use one keyboard/mouse with both monitors in this setup. You also want to install Synergy to enable keyboard/mouse sharing between monitors in this multiseat configuration. Open up your /etc/X11/xorg.conf file for editing; this is what mine looks like:

Section "ServerLayout"

  Identifier    "Layout0"

        Screen      0 "Screen1"

        Option      "Xinerama" "0"

EndSection

Section "Files"

    ModulePath "/usr/lib/xorg/modules/"

EndSection

Section "Monitor"

    Identifier    "Monitor0"

    VendorName    "Unknown"

    ModelName      "Unknown"

    HorizSync      28.0 - 33.0

    VertRefresh    43.0 - 72.0

    Option        "DPMS"

EndSection

Section "Monitor"

    Identifier    "Monitor1"

    VendorName    "Unknown"

    ModelName      "Unknown"

    HorizSync      28.0 - 33.0

    VertRefresh    43.0 - 72.0

    Option        "DPMS"

    Option        "DPI" "96 x 96"

EndSection

Section "Device"

    Identifier    "iGPU"

    Driver        "intel"

    BusID          "PCI:0:2:0"

EndSection

Section "Device"

    Identifier    "Device1"

    Driver        "nvidia"

    BusID          "PCI:1:0:0"

    VendorName    "NVIDIA Corporation"

    BoardName      "GeForce GTX 670"

    Option        "coolbits" "5"

EndSection

Section "Screen"

    Identifier    "Screen0"

    Device        "iGPU"

    Monitor        "Monitor0"

    DefaultDepth    24

    SubSection    "Display"

        Depth      24

    EndSubSection

EndSection

Section "Screen"

  Identifier "Screen1"

  Device    "Device1"

  Monitor    "Monitor1"

  DefaultDepth    24

  Option    "Stereo" "0"

  Option    "SLI" "Off"

  Option    "BaseMosaic" "off"

  SubSection "Display"

  Depth    24

  EndSubSection

EndSection

You can ignore the integrated-GPU portions (the "iGPU" Device and "Screen0" sections), as those are specific to my integrated GPU in case I want to use it in the future; with them in place I can switch over with minimal work. Also, the Option "coolbits" "5" under the Nvidia Device section is optional; it just lets you take manual control of the clock speed and fans of your video card in the Nvidia X Server Settings application. If you came up with a different BusID than I did with the earlier lspci command, change it in the Device section where it says BusID "PCI:1:0:0", and also change anything else that doesn't match your specific hardware (most settings generated by the Nvidia auto-config should be fine). After you have adjusted this file, move on to your amdxorg.conf file:

Section "ServerLayout"

  Identifier    "Layout1"

  Screen      0  "aticonfig-Screen[0]-0" 0 0

        InputDevice    "Keyboard1" "CoreKeyboard" #specify the named input device from the sections below where the void module is loaded for keyboard/mouse

        InputDevice    "Mouse1" "CorePointer"

        Option        "Xinerama" "0" #disable xinerama

        Option        "AutoAddDevices" "0" #stop xorg from automatically configuring input modules

        Option        "AutoEnableDevices" "0" #stop xorg from auto enabling input devices

EndSection

Section "Files"

    ModulePath "/usr/lib/xorg/ati/modules,/usr/lib/xorg/modules/" #specify the directory location of the AMD libglx.so module that you moved earlier, then the default xorg module directory so additional modules can be found

EndSection

Section "Module"

    Disable "evdev" #disable loading of the evdev module, this module will attempt to take control of your input devices (keyboard/mouse), if you allow it to load, you will end up with a double cursor on your screen

EndSection

Section "InputDevice" #load the previously installed void module for your keyboard here.

    Identifier      "Keyboard1"

    Driver          "void"

    Option          "SendCoreEvents" "1"

EndSection

Section "InputDevice" #load the void module for your mouse too.

    Identifier      "Mouse1"

    Driver          "void"

    Option          "SendCoreEvents" "1"

EndSection

Section "Monitor"

  Identifier  "aticonfig-Monitor[0]-0"

  Option    "VendorName" "ATI Proprietary Driver"

  Option    "ModelName" "Generic Autodetecting Monitor"

  Option    "DPMS" "true"

EndSection

Section "Device"

  Identifier  "aticonfig-Device[0]-0"

  Driver      "fglrx"

        Option      "Monitor-DFP10" "aticonfig-Monitor[0]-0"

  BusID      "PCI:2:0:0" #change this to your correct BUSID found with lspci earlier

EndSection

Section "Screen"

  Identifier "aticonfig-Screen[0]-0"

  Device    "aticonfig-Device[0]-0"

  Monitor    "aticonfig-Monitor[0]-0"

  DefaultDepth    24

  SubSection "Display"

  Viewport  0 0

  Depth    24

  EndSubSection

EndSection

As you can see, there are a lot more changes to be made in this file. I have added comments after the important lines to make it easier to follow along.

As far as I know, it is impossible to switch versions of /usr/lib/libGL.so between X sessions (if I am wrong and you have found a way, please let me know) the way you can switch the libglx.so module by specifying different module directories in each xorg.conf (as we have done above); luckily the Nvidia driver is very forgiving and will run with the AMD version of libGL.so (unlike the fglrx driver, which does not tolerate any third-party driver components).
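If you want to double-check which libraries each session actually ended up with, something like this works (standard log locations assumed):

ls -l /usr/lib/libGL.so*                  # see where the libGL symlink points (the fglrx copy, in this setup)
grep -i glx /var/log/Xorg.0.log | head    # GLX module loaded on the Nvidia seat
grep -i glx /var/log/Xorg.1.log | head    # GLX module loaded on the AMD seat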

After you have finished with your xorg.conf files, edit your /etc/lightdm/lightdm.conf file to match the following:

[LightDM]

minimum-display-number=0

minimum-vt=7

[SeatDefaults]

xserver-command=/usr/bin/X

user-session=kde-plasma

greeter-session=lightdm-kde-greeter

[Seat:0]

xserver-command=/usr/bin/X :0

xserver-config=xorg.conf

display-setup-script="/etc/lightdm/seat0.sh"

[Seat:1]

xserver-command=/usr/bin/X :1 -sharevts -novtswitch

xserver-config=amdxorg.conf

display-setup-script="/etc/lightdm/seat1.sh"

autologin-user=amdgpu #change to whatever user you want to be logged in on 2nd seat

Save this file and create a new file called /etc/lightdm/seat0.sh:

#! /bin/sh

sleep 10

xhost + LOCAL:

/usr/bin/killall synergys

sleep 1

/usr/bin/synergys -n main -a 127.0.0.1 --display :0 --restart

Save this file then run "sudo chmod uog+x /etc/lightdm/seat0.sh"

Create a new file called /etc/lightdm/seat1.sh:

#! /bin/sh

xhost + LOCAL:

/usr/bin/killall synergyc

sleep 1

/usr/bin/synergyc -n child --display :1 --restart 127.0.0.1

Save this file then run "sudo chmod uog+x /etc/lightdm/seat1.sh"

To explain these files: they are used to start a Synergy server and client instance on each seat. Synergy sometimes autostarts on boot, so we have to kill any existing instances before starting them with our settings. Synergy wasn't really designed for what I am using it for (it is usually used to control desktops on another computer), so it sometimes crashes when you try to load both the client and the server. This is the reason for the --restart flags, which auto-restart the process if it crashes, and also for the "sleep 10" in seat0.sh: when I tried to load synergyc and synergys at the same time, or even in the same shell script, synergys would not start at all, but giving it some sleep time lets the client load first, and then the server starts without any problems.
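As an optional sanity check, after logging in on both seats you should see one server and one client process running:

pgrep -a synergys    # Synergy server, started by seat0.sh on display :0
pgrep -a synergyc    # Synergy client, started by seat1.sh on display :1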

I'm pretty sure I've covered everything here; just hook up your main monitor to your Nvidia card and your secondary monitor to your AMD card and you should be all set. One small issue with this setup (which you will likely encounter) is that on boot, the primary display auto-switches to virtual terminal 1 when the seat1 X session is loaded, and again when seat1 logs in; this is a flaw in LightDM that does not have a solution at the moment (though I have read that it is being worked on), as each new X session is assigned a new VT and, in the process, the main terminal is switched back to VT1. This is a minor annoyance though, as you can easily switch back to your primary X session on seat0 with Ctrl+Alt+F7.
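If you prefer, the same switch can be scripted with chvt instead of the keyboard shortcut (this assumes seat0's X session ends up on VT7, the usual default; adjust if yours differs):

sudo chvt 7    # jump the console back to the seat0 X session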