

emuller
Journeyman III

Anyone got >4 GPUs for Linux?

Hi All,

I have succeeded in configuring and using, with CAL, two 4870x2s running Ubuntu 8.10 64-bit, kernel 2.6.27-14-generic, Xorg 1.5.2 and Catalyst 9.5.

Adding further cards causes the system to freeze, though the Xorg.0.log shows no serious errors or segfaults (which would be present if I attempted to use Jaunty; see previous post: http://forums.amd.com/devforum/messageview.cfm?catid=328&threadid=112537&highlight_key=y&keyword1=ja... ).

aticonfig --lsa
* 0. 03:00.0 ATI Radeon HD 4870 X2
  1. 04:00.0 ATI Radeon HD 4870 X2
  2. 07:00.0 ATI Radeon HD 4870 X2
  3. 08:00.0 ATI Radeon HD 4870 X2
  4. 0e:00.0 ATI Radeon HD 4850 X2
  5. 0f:00.0 ATI Radeon HD 4850 X2

I have tried both an ASRock x58 deluxe and an MSI GD70-790fx motherboard, which should both fit 4 such cards in theory.

 

Might this be a Linux driver issue, a hardware issue, or a kernel version problem?

 

Thanks for any pointers ...

 

0 Likes
24 Replies

Looks like a driver issue. Check this link -- http://forums.guru3d.com/showthread.php?p=2862271 -- or search for "4x4870x2" on Google. I wasn't able to find any link to a system with >4 ATI GPUs, while 8-GPU CUDA systems are pretty common these days.


Indeed I'm jealous of my CUDA counterparts! Any indication of if and when 4x 4870x2 will be addressed (specifically for Linux64)? I have two 4870x2 cards sitting in boxes, so to speak. It would be a shame to have to post them on eBay.

Hi, I checked with the driver team on the systems they've tested with 8 GPUs (not necessarily four X2s), and they were running RHEL 5.3 and SUSE 11.0. Any chance you can give either of those a try? Also, they told me you need to make sure it is a 64-bit system (which it looks like you have already done). For RHEL 5.3, another possibility, although not officially supported but supposed to be essentially the same thing, is CentOS 5.3.

Using SUSE Enterprise Linux 11.0 and the newest Catalyst 9.6, I am still getting the same behaviour:

Two X2 cards work, for a total of 4 GPUs. Adding another card (and proceeding as when going from one card to two: aticonfig --initial --adapter=all -f, which with the newest Catalyst 9.6 causes amdpcsdb to be deleted) causes a lock-up such that external ssh no longer works, but num-lock is still toggle-able. Same as for Ubuntu Intrepid.

The final message in Xorg.0.log is as follows:

(II) Loading /usr/lib/xorg/modules//amdxmm.so
(II) Module amdxmm: vendor="X.Org Foundation"
        compiled for 1.4.99.906, module version = 1.0.0
(II) Loading extension AMDXVOPL
(II) fglrx(0): Enable composite support successfully
(WW) fglrx(0): Option "VendorName" is not used
(WW) fglrx(0): Option "ModelName" is not used
(II) fglrx(0): X context handle = 0x1
(II) fglrx(0): [DRI] installation complete

 

Whereas for 4 cards (which works):

 

(II) fglrx(0): Enable composite support successfully
(WW) fglrx(0): Option "VendorName" is not used
(WW) fglrx(0): Option "ModelName" is not used
(II) fglrx(0): X context handle = 0x1
(II) fglrx(0): [DRI] installation complete
(==) fglrx(0): Silken mouse enabled
(==) fglrx(0): Using HW cursor of display infrastructure!
(==) fglrx(0): Using software cursor
(II) fglrx(0): RandR 1.2 enabled, ignore the following RandR disabled message.
(II) fglrx(0): atiddxDisplayScreenLoadPalette: numColors: 256
(--) RandR disabled
(II) fglrx(1): driver needs X.org 1.4.x.y with x.y >= 99.906
(II) fglrx(1): detected X.org 7.4.2.0
(II) fglrx(1): doing DRIScreenInit
(II) fglrx(1): DRIScreenInit for fglrx driver
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 16, (OK)
....

 

NB: With the new Catalyst 9.6, I had to use a KVM switch and attach an input to each GPU for them to be activated by X. I have a machine running 9.5 with two 4850x2s (4 GPUs) which does not require an input per GPU, but to get that to work I had to keep amdpcsdb under revision control and manually edit it after adding the second card (there, aticonfig --initial --adapter=all -f does not delete amdpcsdb, unlike the new behaviour in 9.6).
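The revision-control workaround above can be sketched with git. This is only an illustration: the real database lives at /etc/ati/amdpcsdb on a typical Catalyst install (assumed path), but a scratch directory stands in here so the sketch is safe to run.

```shell
# Sketch: keep amdpcsdb under git so a known-good copy can be restored
# after "aticonfig --initial --adapter=all -f" deletes it (Catalyst 9.6
# behaviour described above). ATIDIR would be /etc/ati on a real box.
ATIDIR="${ATIDIR:-$(mktemp -d)}"
printf 'known-good settings\n' > "$ATIDIR/amdpcsdb"   # stand-in for the real database
cd "$ATIDIR"
git init -q .
git add amdpcsdb
git -c user.name=nobody -c user.email=nobody@localhost \
    commit -q -m "amdpcsdb known good (4 GPUs working)"
rm amdpcsdb               # simulate Catalyst 9.6 wiping the database
git checkout -- amdpcsdb  # roll back to the known-good copy
```

After the rollback, the file contains the committed known-good contents again; on a real system you would then restart X instead of re-running aticonfig.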

 

Hardware: MSI gd70-790fx (latest BIOS), 3x Sapphire 4850x2 2GB cards, Phenom II 955

I now have 5 GPUs running in one system:

+ dual 4850x2s = 4 GPUs

+ gtx260 = 1 GPU

This puts to rest previous concerns that it might be an x8-slot issue. The 4850x2s here are in slots 0 and 1, which on my MSI gd70-790fx puts them in x8 mode. CUDA and Stream play fine together, as CUDA does not need X loaded, as previously discussed here:

http://forums.amd.com/devforum/messageview.cfm?catid=328&threadid=113751

In fact, I guess I can get them to play together in the same program ... once I get around to installing pycuda ...

The conclusion: at the time of writing, >4 GPUs is not possible on 64-bit Linux without using at least a few NVIDIA cards. The probable cause is ATI driver issues.

That CUDA runs in runlevel 3 is a BIG plus.  It would be great if the same feature were available for ATI cards.

 

 

 

 

 


Hi emuller, this is my first post on these forums.

I expected to work around multi-gpu support issues with virtualization.

I have in the past run 16 separate instances of Linux under the micro-kernel L4KA.

I am hoping to just do the same with 8 gpu's in my box.

8 GPUs = 8 instances of Linux, 8 X servers, each with a different PCI address.

(actually, the X2 cards make this 4 instances of Linux)

Might work. Especially with mix and match hardware.

I use MPI so it's not too much an issue to have logical 'boxes' on the same machine.

-and those instances are stripped to nothing but application and xserver...


arreaux:  virtualization is an interesting idea.  I was not aware that one could have direct access to PCIe.

I would go with 2 VMs and 2 X servers ... since for each X server you would presumably need a monitor (or KVM) attached to its primary board, and 8 such cables gets annoying.

This solution is perhaps a bit exotic for me ... but perhaps you could point me to a link where I could try out Linux on L4Ka? I assume it's not a standard distro ... which will be a problem for me ... I need lots of Python packages for my applications, which are a pain to build from source.

NB: I tried running just 2 Xservers (no VMs), but fglrx does not support this (and crashes)

 

 

 


I now have a box with installations of:

SLES 11

Debian lenny

Ubuntu 8.10

Gentoo

RHEL 5.3

I have attempted >4 GPUs on all of them, and of those that work at all with up to 4 GPUs (Ubuntu 8.10, SLES 11, RHEL 5.3), none could break the 4-GPU boundary. Crash characteristics are similar for all three, as described below for RHEL 5.3:

Following the procedure to add 1-4 cards from a fresh install of RHEL 5.3: yum update, reboot, and ATI driver install via

$ sh ati-driver-installer-9-6-x86.x86_64.run

reboot or no reboot

$ aticonfig --lsa

* 0. 03:00.0 ATI Radeon HD 4850 X2
  1. 04:00.0 ATI Radeon HD 4850 X2
  2. 07:00.0 ATI Radeon HD 4850 X2
  3. 08:00.0 ATI Radeon HD 4850 X2
  4. 0d:00.0 ATI Radeon HD 4850 X2
  5. 0e:00.0 ATI Radeon HD 4850 X2

$ rm /etc/X11/xorg.conf

$ touch /etc/X11/xorg.conf

$ aticonfig --initial --adapter=all -f

$ startx

For up to 4 GPUs installed in the motherboard, X will start, amdcccle will reveal monitors 1,{3,5,7}, and cal will detect all installed devices.

For >4 GPUs, the screen will flicker through modes, then freeze on a black screen. External ssh sessions will freeze. However, the keyboard is not frozen, as num-lock is still toggle-able.
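As a small sanity check before risking startx, the adapter list can be parsed programmatically. A sketch follows; the line format is only assumed from the aticonfig --lsa listings in this thread, not taken from any official interface:

```python
import re

# Sketch: parse `aticonfig --lsa` output and count the adapters before
# starting X. The "N. BB:DD.F name" line format is assumed from the
# listings shown above.
ADAPTER_RE = re.compile(r"^\s*\*?\s*(\d+)\.\s+([0-9a-f]{2}:[0-9a-f]{2}\.\d)\s+(.+)$")

def parse_lsa(text):
    """Return (index, pci_address, name) for each adapter line."""
    adapters = []
    for line in text.splitlines():
        m = ADAPTER_RE.match(line)
        if m:
            adapters.append((int(m.group(1)), m.group(2), m.group(3)))
    return adapters

sample = """\
* 0. 03:00.0 ATI Radeon HD 4850 X2
  1. 04:00.0 ATI Radeon HD 4850 X2
  2. 07:00.0 ATI Radeon HD 4850 X2
"""
print(len(parse_lsa(sample)))  # 3
```

On a live system one would feed it the real command output and compare the count against the number of GPUs physically installed.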

The final words revealed in Xorg.0.log:


(II) LoadModule: "glesx"
(II) Loading /usr/lib64/xorg/modules/glesx.so
(II) Module glesx: vendor="X.Org Foundation"
        compiled for 7.1.0, module version = 1.0.0
        ABI class: X.Org Server Extension, version 0.3
(II) Loading extension GLESX
(II) fglrx(0): GLESX enableFlags = 90
(II) fglrx(0): Using XFree86 Acceleration Architecture (XAA)
        Screen to screen bit blits
        Solid filled rectangles
        Solid Horizontal and Vertical Lines
        Driver provided ScreenToScreenBitBlt replacement
        Driver provided FillSolidRects replacement
(II) fglrx(0): GLESX is enabled
(II) LoadModule: "amdxmm"
(II) Loading /usr/lib64/xorg/modules/amdxmm.so
(II) Module amdxmm: vendor="X.Org Foundation"
        compiled for 7.1.0, module version = 1.0.0
        ABI class: X.Org Server Extension, version 0.3
(II) Loading extension AMDXVOPL
(II) fglrx(0): Enable composite support successfully
(WW) fglrx(0): Option "VendorName" is not used
(WW) fglrx(0): Option "ModelName" is not used
(II) fglrx(0): X context handle = 0x1
(II) fglrx(0): [DRI] installation complete

I am using an MSI gd70-790fx motherboard with the most recent BIOS, a Phenom II 955, Catalyst 9.6, RHEL 5.3 64-bit with all updates applied, and 3 identical Sapphire 4850x2 cards, with a Thermaltake 1500W power supply. I have also attempted 5x 4850 GPUs via two 4850x2s and one Gigabyte 4850, with the same results.

I have successfully run two 4850x2s together with an NVIDIA GTX 260 (above); therefore I conclude it is neither a motherboard nor a power issue.

I have also tried an ASRock x58 deluxe i7 motherboard with Ubuntu 8.10 and Catalyst 9.5, our present production system.  The 4 GPU limit remains, with similar freezing characteristics on starting X.

I am running a fresh install of RHEL 5.3 64-bit, the only officially supported Linux for the drivers, with only ATI Catalyst 9.6 installed. Therefore I conclude it is not an OS issue.

I can run up to 4 GPUs no problem, therefore I conclude it is not a configuration issue, except that perhaps special black-magic driver options may be required for >4 GPUs.

By process of elimination, the most likely cause is that Catalyst 9.6 is incapable of handling more than 4 GPUs on most major Linux distros, probably all.

@ Michael Chu: If that is not the case, i.e. if >4 GPUs ARE known to work on Linux in the AMD/ATI lab, would it be possible to provide more details on the hardware and software configuration so that >4 GPUs can be reproduced in the wild?

Thanks in advance for your help in using AMD/ATI hardware to its full potential.

 

 

 

 

 


Hi emuller...

I'm sure the limitations you describe are there.

What we are trying to do is maximize compute power in one box, so it seems logical to use all the PCIe slots available.

That's why I have a MSI K9A2 Platinum and big honking Ultra 1600w psu.

Let's realize what we're doing here.

This is cutting edge stuff and we're trying to run the Indy 500 by building a car from scratch.

Part of my scratch built solution is L4Ka::Pistachio.

Go to http://l4hq.org/  for details.

and also: http://l4ka.org/

There is a CD available from TU Dresden of their flavor of L4, Fiasco,

at: http://os.inf.tu-dresden.de/L4/LinuxOnL4/demo.shtml

interesting stuff YMMV yada yada...

Dave


Hi again emuller...

I had a problem when I first put a 3870x2 in my box.

Running the radeonHD driver, the X configuration was messed up.

I had to manually edit which GPU the driver was pointing to.

that x2 board showed both GPUs, the PLX chip and the onboard sound all on the PCI bus.

Only one of the GPUs is directly accessible... so check the xorg.conf file and make sure that it tries to access the right GPU per board. (This is a guess based on the 3870x2 and may have changed with the 4870x2.)


Originally posted by: arreaux ... Only one of the GPU's is directly accessable..so check the xorg.conf file and make sure that it tries to access the right GPU per board. (this is a guess based on 3870x2 and may be changed in 4870x2)

I have dual 4850x2s running in production systems, so I guess I'm past the problems you are referring to. Indeed you are right: amdpcsdb stores settings per PCIe ID, and adding additional cards can shift PCIe IDs and screw things up for Catalyst <=9.5. Catalyst 9.6 "addresses" this by simply deleting amdpcsdb on the "aticonfig --initial --adapter=all -f" command, which should be called when you install an additional card.

Hi again emuller

Here's the Ultra 1600W PSU:

http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=2937371&CatId=2535

117 amps of single-rail 12V power (1400W).

Buy this first, then find a case to fit. I have a Lian-Li V1200 Plus II and had to remove one of the lower drive racks to make it fit.

so much for hardware geeking...

Dave


Hi Dave,

You have an MSI K9A2 Platinum? Good choice, as it has proven itself well amongst 4-double-wide motherboards in the Folding@home world ...

http://atlasfolding.com/?page_id=148

Have you tried double and triple 48X0x2 yet?  Can you confirm you are seeing the same issues I am with that MB?

I'm assuming things are the same in XP or Vista as on Linux, but I haven't tried it. Have you?

Some nice info on 4-double-wide setups using NVIDIA cards, much of which applies to ATI cards as well:

http://www.nvidia.com/object/tesla_build_your_own.html

BTW who makes a 1600W power supply?

 

 


Hi Dave, emuller

This thread is very interesting, and I would like to share my experience. I was trying for a while to get >4 GPUs. I have succeeded in getting a 4-GPU setup (2x 4870X2) working in an MSI K9A2P (Phenom X4 9950BE), OpenSUSE 11.0 x86_64, Catalyst 9.6... but only if the cards were in full PCIe x16 slots (x16 physical / x8 electrical). My PSU is a XILENCE 1200W GAMER EDITION.

If you don't mind, I am very interested to know:

- The exact MOBO, CPU and GPU brand you're using.

- whether you succeeded in booting a dual-GPU card (4870x2/4850x2) in an x16 physical / x8 electrical PCIe slot, especially in the case of the MSI K9A2-P's lighter slots.

- your experience with HW partitioning for Linux virtualization (great idea!) and/or links.

- Have you ever been forced to reflash card firmware?

- Water-cooling anyone?

I have one 4870X2 waiting in its box... Sadly, I have seen only vague responses from Michael Chu (AMD) to this specific question. But from our experience, it seems that if you want serious GPU setups (6-8 GPUs), NVIDIA is the answer.

FYI, instead of an expensive high-capacity PSU, have you considered a dual-PSU setup, like this cool guy made (NVIDIA too 😞 ):

http://estoniadonates.wordpress.com/2009/04/02/dual-psu-pc-system/

Keep up the good work!


Hi!

With a different PSU I have succeeded in getting to your state... I was able to boot three 4870X2 cards, one of them in an x8 PCIe slot. The output of lspci:

00:00.0 Host bridge: ATI Technologies Inc RD790 Northbridge only dual slot PCI-e_GFX and HT3 K8 part
00:02.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (external gfx0 port A)
00:03.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (external gfx0 port B)
00:05.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port B)
00:09.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port E)
00:0b.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (external gfx1 port A)
00:12.0 SATA controller: ATI Technologies Inc SB600 Non-Raid-5 SATA
00:13.0 USB Controller: ATI Technologies Inc SB600 USB (OHCI0)
00:13.1 USB Controller: ATI Technologies Inc SB600 USB (OHCI1)
00:13.2 USB Controller: ATI Technologies Inc SB600 USB (OHCI2)
00:13.3 USB Controller: ATI Technologies Inc SB600 USB (OHCI3)
00:13.4 USB Controller: ATI Technologies Inc SB600 USB (OHCI4)
00:13.5 USB Controller: ATI Technologies Inc SB600 USB Controller (EHCI)
00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 14)
00:14.1 IDE interface: ATI Technologies Inc SB600 IDE
00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia
00:14.3 ISA bridge: ATI Technologies Inc SB600 PCI to LPC Bridge
00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Link Control
01:00.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
02:04.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
02:08.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
03:00.0 VGA compatible controller: ATI Technologies Inc R700 [Radeon HD 4870 X2]
03:00.1 Audio device: ATI Technologies Inc HD48x0 audio
04:00.0 Display controller: ATI Technologies Inc R700 [Radeon HD 4870 X2]
05:00.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
06:04.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
06:08.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
07:00.0 VGA compatible controller: ATI Technologies Inc R700 [Radeon HD 4870 X2]
07:00.1 Audio device: ATI Technologies Inc HD48x0 audio
08:00.0 Display controller: ATI Technologies Inc R700 [Radeon HD 4870 X2]
09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
0a:00.0 RAID bus controller: Promise Technology, Inc. PDC42819 [FastTrak TX2650/TX4650]
0b:00.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
0c:04.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
0c:08.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ab)
0d:00.0 VGA compatible controller: ATI Technologies Inc R700 [Radeon HD 4870 X2]
0d:00.1 Audio device: ATI Technologies Inc HD48x0 audio
0e:00.0 Display controller: ATI Technologies Inc R700 [Radeon HD 4870 X2]
0f:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. IEEE 1394 Host Controller (rev c0)

So! The output of aticonfig --list-adapters confirmed the status:

igimenez@krakken:~> cat out_aticonfig_listadapters.out
* 0. 03:00.0 ATI Radeon HD 4870 X2
  1. 04:00.0 ATI Radeon HD 4870 X2
  2. 07:00.0 ATI Radeon HD 4870 X2
  3. 08:00.0 ATI Radeon HD 4870 X2
  4. 0d:00.0 ATI Radeon HD 4870 X2
  5. 0e:00.0 ATI Radeon HD 4870 X2

* - Default adapter


As Dave said, aticonfig --adapter=all --initial created a bogus configuration for me. There were repeated entries, and the aticonfig devices in xorg.conf did not point to the correct PCI addresses; e.g. GPU #3, instead of being configured at 08:00.0, was pointing incorrectly to 09:00.0 (which is an Ethernet controller).

It was interesting to note that the output of aticonfig --lsch showed a possible reason for this misalignment:

igimenez@krakken:~> aticonfig --lsch
CrossFire chain for adapter 0, status: disabled
  0. 03:00.0 ATI Radeon HD 4870 X2
  1. 04:00.0 ATI Radeon HD 4870 X2
  3. 08:00.0 ATI Radeon HD 4870 X2
  Invalid slave, no matching adapter found at Bus ID 09:00.0

I will let you know more about this

BR

Ivan


Hi emuller,

I'm interested in the multi-GPU idea. But since each process can only address a single adapter, how do you plan to get them to work in parallel? MPI? Could you share some details?


@hagen: Yes, MPI. My application is fine with low bandwidth and high latency between nodes. As for coding for the GPU, this is very problem-specific and requires a lot of thought about architecture, memory layout, etc. Mileage of this solution for your problem may vary.

@emuller: With one graphics card on each node, I can see how to get MPI to work. But with multiple GPUs on the same node, I am not clear how. Say I have 4 graphics cards on s0: if I start 4 MPI processes on s0, how do I pass different environment variables to each of them to target a different card?


@hagen: To get multiple GPUs on the same node, each MPI process must know the ID of the GPU to run on (e.g. 0, 1, 2, 3; have a look in the SDK user guide for how to tell CAL/Brook+ to use a GPU ID != 0).

To assign GPU IDs to MPI processes, compute them on MPI rank 0 and scatter. The algorithm is roughly as follows:

1) h = hash of (machine name (or IP), counter=0) key-value pairs

2) ml = list of machine names for each rank, sorted by MPI rank

3) loop through ml; for each entry, use the machine name (or IP) to look up the current value of its counter -- that is the GPU ID -- then increment the counter by one.

Scatter those GPU IDs to the other nodes.
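The steps above can be sketched in plain Python. assign_gpu_ids is a hypothetical helper name; it takes the hostnames of all ranks in rank order (step 2) and applies the per-host counters of steps 1 and 3. The mpi4py gather/scatter around it, and the CAL device-selection call itself, are left out.

```python
# Sketch of the rank-0 GPU-ID assignment described above.
def assign_gpu_ids(hostnames_by_rank):
    """The n-th rank seen on a host gets GPU n on that host."""
    counters = {}                    # hostname -> next free GPU ID (step 1)
    gpu_ids = []
    for host in hostnames_by_rank:   # step 3: walk ranks in order
        gpu_ids.append(counters.get(host, 0))
        counters[host] = counters.get(host, 0) + 1
    return gpu_ids

# Four ranks spread over two hosts:
print(assign_gpu_ids(["s0", "s0", "s1", "s1"]))  # [0, 1, 0, 1]
```

On rank 0 one would gather each rank's hostname, call this, and scatter the resulting list so every process knows which local GPU to open.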

 

 


@emuller:

Thanks for the instructions.  Just to make sure, I don't have to crossfire the boards to do what you suggest, right?

Also, following up on the earlier discussion in this thread, we got the same MSI gd70-790fx motherboard you have. From the specs, our tech thinks we can either populate all 4 PCIe slots at x8 speed each, or 2 PCIe slots at x16 speed each. He thinks that if we put in three 4870x2s, the motherboard will only be able to communicate with two of them, and the third 4870x2 will end up with no communication channel left. Is this consistent with your experience?


@hagen:

I'm running on Linux, and I have CrossFire disabled. Three 4870x2s will cause one pair of cards to run at x8 speed. However, as discussed in this thread, I've found that the drivers support no more than two 4870x2s (a hardware issue was ruled out by filling the remaining slots with NVIDIA cards) ... but please do let us know if you can get it to work. Are you running on Vista?

After startx and the freeze with 6 GPUs, the kernel is still responding to SysRqs ...

Under RHEL 5.3, enable kernel sysrq keys:

echo 1 > /proc/sys/kernel/sysrq

Then:

startx

<freeze>

Alt-SysRq-s -> sync disks

Alt-SysRq-u -> remount read-only

Alt-SysRq-b -> reboot

The newly released Catalyst 9.7 resolves the incompatibility issues with Ubuntu Jaunty (9.04).

I can confirm I have two 4850x2s running under Ubuntu 9.04, kernel 2.6.28-11-generic, Xorg 1.6.0.

Still no >4 GPU support, but now I can add idle cards to other boxes which I could not downgrade from Jaunty. Progress!

 

 

 

 


How do I marry those with an NVIDIA card? I have two 4850s in my desktop, and I want to try CUDA just as an experiment.

Has anybody done ATI and NVIDIA in the same machine? On Windows? Any issues?