I thought it might help to share my current hardware setup:
My motherboard is a Gigabyte GA-P35 with 8 GB of RAM.
CPU is Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
The card I was thinking of buying is something like this:
Gigabyte ATI Radeon HD5870 2GB Eyefinity 6 Edition [R5876P-2GD-B]. (Currently I have an Nvidia 8600 GT.)
I'm running all this on Ubuntu Linux 9.04, but I'll shortly be upgrading to 10.04.
Based on this post, it looks like I can share the card between video and OpenCL.
If your 5870 is attached to the monitor (so it is the primary video card in the OS), some amount of video memory is always used by the system, for example by the desktop window manager (DWM). So you will never get the full video memory for your own purposes. I suppose that disabling the Aero UI, or setting the 5870 as an auxiliary video card (in another slot), should free this memory, and then you can try to allocate a 1 GB piece of video memory.
And a little advice: you shouldn't try to get the whole memory installed on the video card. The driver also manages video memory for its own internal services: render targets, swap chains and so on.
It would be good to confirm that there are no constraints other than that total memory usage cannot exceed 100%.
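One way I could check that myself (just a minimal sketch, assuming the OpenCL headers and an ICD such as the one from the ATI Stream SDK are installed; the reported values are driver-dependent) would be to query the device for its total global memory and the largest single buffer it will let me allocate:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong global_mem = 0, max_alloc = 0;

    /* Take the first platform and the first GPU device on it. */
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU device found\n");
        return 1;
    }

    /* Total device memory vs. the largest single buffer the driver allows. */
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(global_mem), &global_mem, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE, sizeof(max_alloc), &max_alloc, NULL);

    printf("Global memory:     %llu MB\n", (unsigned long long)(global_mem >> 20));
    printf("Max single buffer: %llu MB\n", (unsigned long long)(max_alloc >> 20));
    return 0;
}

If CL_DEVICE_MAX_MEM_ALLOC_SIZE comes back noticeably smaller than CL_DEVICE_GLOBAL_MEM_SIZE (as it often does), that would line up with the advice above about not expecting the whole card's memory.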
AFAIK, the ATI cards can be shared between display and computation. But when a display is attached to them, the OS binds them to a watchdog timer which keeps checking whether the display is available. When this timer runs out, it interrupts the computation program and restarts the GPU. However, the ATI Stream release notes give instructions for disabling the timer. I am not sure, but there should be some way to set this timer on Linux as well.
That 5-second limit is a watchdog timer in Windows itself; it restarts the GPU if it doesn't respond. There is a way to disable it.
There is no such timer on Linux.
If you run long-running kernels, your GUI will be jerky and responsiveness will be bad. This is because while the GPU is running a computation kernel it can't redraw the screen, so if your kernel takes 5 seconds to execute you won't get any response for those 5 seconds except the mouse pointer.
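If you want to keep the desktop usable, one workaround (a rough sketch only; queue, kernel and total_items are illustrative names, and the kernel is assumed to read the chunk's base index from argument 0 and add it to get_global_id(0) itself) is to split the job into many short launches so the driver can redraw between them:

#include <CL/cl.h>

/* Run one long 1-D job as many short launches so the GPU can service the
 * display between them. */
static void run_in_chunks(cl_command_queue queue, cl_kernel kernel,
                          size_t total_items, size_t chunk)
{
    for (size_t base = 0; base < total_items; base += chunk) {
        size_t count = (total_items - base < chunk) ? (total_items - base) : chunk;
        cl_uint offset = (cl_uint)base;

        clSetKernelArg(kernel, 0, sizeof(offset), &offset);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &count, NULL, 0, NULL, NULL);
        clFinish(queue);  /* block here so the driver gets a chance to redraw */
    }
}

The chunk size is a trade-off: smaller chunks keep the screen smoother but add per-launch overhead.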
>That 5-second limit is a watchdog timer in Windows itself; it restarts the GPU if it doesn't respond. There is a way to disable it.
>There is no such timer on Linux.
I think you're right (contrary to the Nvidia documentation) because I can find no mention of it anywhere.
Thanks for your help.
I believe it is entirely possible for both ATI and NVIDIA. The reasoning is that all DirectX 11 cards support Compute Shaders (which is pretty much just a marketing name Microsoft uses to sell 'integrated' GPGPU support with DirectX), and DirectX 10 cards support a subset of the Compute Shader specification (limited in the number of threads, dispatch, instruction set and type support). As you seem more like a Linux fellow (I myself prefer Windows, but to each his/her own), I assume you're using OpenCL/ATI Stream/CUDA, plus OpenGL. I'm not quite sure how you would do it on Linux, but I'm sure it's possible, since it's supported by the graphics hardware for DirectX 10 & 11.
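If you want a quick sanity check on Linux, something like this (just a sketch, assuming an OpenCL ICD such as the one from the ATI Stream SDK is installed) will list whatever platforms and devices the runtime exposes, so you can confirm the 5870 shows up as a compute device:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(4, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[128];
        cl_device_id devices[8];
        cl_uint num_devices = 0;

        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
        printf("Platform: %s\n", pname);

        /* List every device (GPU and CPU) this platform exposes. */
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[128];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("  Device: %s\n", dname);
        }
    }
    return 0;
}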