I have been piggybacking on this and other related threads started by yurtesen, because I share some of the same problems. I have both a 6670 and a 6870 card in an 8120 machine running OpenSuSE 11.4 with the AMD APP 2.6 SDK installed. The 6670 is the default graphics card, and I wish to use the 6870 for GPU compute. Initially neither fglrxinfo nor clinfo recognized the 6870, but the instructions by tzachi.cohen to create the xorg.conf file using "aticonfig --initial --adapters=all" have succeeded in making it visible to fglrxinfo. However, it remains hidden from clinfo and, I assume, from OpenCL.

What is the purpose of the COMPUTE environment variable, and how should it be set in my situation? Where is this documented, for that matter? I note that there are no man pages for aticonfig, fglrxinfo, or clinfo on my system, and they appear to have been installed from the fglrx package. Should there have been man pages for these commands? If not, where can I find them?

I have the impression from another thread that accessing the 6870 requires defining and linking an entirely new X session (as is created when another user logs in) to that card before OpenCL will successfully communicate with it. Is this true? The current xorg.conf file sets the 6670 as display 0, screen 0, and the 6870 as display 0, screen 1. Shouldn't the 6870 have a separate display to itself?
By default, clinfo (or any other CL application) will see only the device associated with the current X display. The current X display is determined by the 'DISPLAY' environment variable. If DISPLAY is set to ':0', all devices will be exposed. The COMPUTE environment variable is simply an override for the DISPLAY environment variable.
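In shell terms, the precedence described above can be sketched like this (a toy illustration of the fallback logic, not fglrx code; the variable name `effective_display` is my own):

```shell
# Illustrative only: the runtime reads COMPUTE first and falls back
# to DISPLAY when COMPUTE is unset -- mimicked here with shell
# parameter expansion.
DISPLAY=:0.1          # current X display: screen 1 only
COMPUTE=:0            # override: expose every device on display 0
effective_display=${COMPUTE:-$DISPLAY}
echo "$effective_display"    # prints :0
```

With COMPUTE unset, `effective_display` would instead be `:0.1` and only that screen's device would be visible.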
Setting an environment variable in the context of a console process can be done with:
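A minimal example, assuming a bash/sh login shell (the exact command from the original post appears to have been lost from the thread):

```shell
# Make all devices on X display :0 visible to CL applications
# started from this shell; 'export' places the variable in the
# environment of every child process.
export COMPUTE=:0

# Then run the application from the same shell, e.g.:
#   clinfo
```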
This is documented in the SDK release notes.
As for the second issue, exposing the GPU in a remote SSH session, bear two things in mind:
1.) The remote session has to use the machine's local X server, i.e. do not pass '-X' (X forwarding) on the ssh command line.
2.) The machine has to be configured to allow remote sessions access to the local X server. How to configure this varies from one Linux distribution to another.
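A sketch of both points together. The xhost line goes into the display manager's init script so it runs whenever the X server starts; the GDM path is the one reported later in this thread, and `gpu-host` is a placeholder (adjust for your distribution):

```shell
# (2) On the machine with the GPUs: allow local non-X sessions
#     (such as incoming SSH logins) to talk to the running X server.
#     Line to add to the display manager's init script,
#     e.g. /etc/gdm/Init/Default on GDM:
xhost +local:

# (1) From the remote client: connect WITHOUT X forwarding, so the
#     session keeps using the GPU machine's local X server:
#   ssh user@gpu-host          # note: no '-X'
#   export COMPUTE=:0
#   clinfo
```

`+local:` grants access only to local (non-network) connections, which is why it is preferable to a bare `xhost +`.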
My thanks to Tzachi.Cohen for the instructions on the use of COMPUTE. clinfo now sees the CPU and both GPUs, and I have been able to run the MatrixMultiplication sample on all three pieces of hardware. I see from another thread, just started, that I can restrict GPU computation to one of the GPUs by setting COMPUTE=:0.0 or COMPUTE=:0.1. Thanks again.
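The per-GPU selection can also be done per invocation, without exporting anything. A sketch, assuming the screen numbering from the xorg.conf described above (:0.0 is the 6670, :0.1 is the 6870):

```shell
# Prefixing a command sets COMPUTE only for that one process:
#   COMPUTE=:0.0 ./MatrixMultiplication    # first GPU (6670)
#   COMPUTE=:0.1 ./MatrixMultiplication    # second GPU (6870)
# Demonstration of the prefix form with a stand-in command:
COMPUTE=:0.1 sh -c 'echo "COMPUTE is $COMPUTE"'   # prints: COMPUTE is :0.1
```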
You were right! I was missing

xhost +local:

in /etc/gdm/Init/Default (the link you gave says 'xhost +', but '+local:' seems more secure and works...)
I also had to set COMPUTE=:0 in a script in the /etc/profile.d/ folder...
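For reference, a minimal version of such a script. The filename is my own choice; on most distributions any file ending in .sh under /etc/profile.d/ is sourced by login shells:

```shell
# /etc/profile.d/compute.sh  (hypothetical filename)
# Expose all CL devices on X display :0 to every login session.
export COMPUTE=:0
```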
Now I have the same problem on Ubuntu, but I am not sure which file to edit for lightdm... any ideas?