What is the maximum for
1) max 8 devices [NACK] <-- DISASTER; ABSOLUTE CATASTROPHE, MISTAKE !!!!
2) contexts [ACK] <-- if no limit (not guaranteed!)
3) buffers [ACK] <-- if no limit (not guaranteed!)
4) command queues [ACK] <-- if no limit (not guaranteed!)
programmed by AMD for OpenCL in the latest OpenCL SDK ?
What is included in the max ?
a) GPUs ?
b) embedded devices ?
c) accelerators ?
d) cameras such as those used in smart phones ?
IF 8 is the max, that is a SERIOUS PROBLEM !!!!
Can AMD verify this max of 8 ?
catching Pacific waves to cool down !
ps, this kind of performance issue should have been taken into account right from the beginning of development !
8 as the max for GPUs !!!!???
Any comments from the AMD top guns about the 8 ?
A max of 8 is a disaster for developers !!!
Can't tell clients about a max of 8 !!!!
~ catching cooler Pacific waves !
ps. this is also a performance issue, and as we proposed, performance
should have been built in right from the beginning to avoid disasters !!!!
Don't forget that most motherboards don't support more than 8 PCI-Express devices, and heat will be a big problem then!
What about distributing your work over a LAN ?
Distributing your work over a LAN is not really the option one wants. I agree that a max of 8 GPUs is not really enough.
We all remember the Y2K problem, when everybody was pale as death because the VHS player on the TV could not show dates starting with 2xxx. This happened only because nobody thought it would outlive 2000. But why?! Why do people think that something like that won't happen again?
The new types of BIOS (UEFI or something like that) were developed 6 months ago. I would bet a large amount that even today nobody thinks of addressing more than 8 PCI-E devices.
Take this server board, for example: 8 x16 PCI-E expansion slots. Why can't I install 8 dual-GPU cards inside? 16 GPUs. Or why can't I install PCI extension systems?
8 * 2 dual-GPU cards = 32 devices in one system. I ask: why not? There are many applications that can leverage multi-GPU capability, but communication latency and bandwidth between devices are crucial. Nothing can beat internal PCI-E (neither LAN nor even InfiniBand).
About heat issues: I was seriously considering installing a water-cooling radiator in the extra 1U of space above the cards (the Tyan beast is a 4U solution) and putting the pump in the dead space at the front of the case. These things seem so trivial to me, yet nobody supports solutions like this, and if the new BIOSes need hacking yet again... that will be the biggest facepalm ever: redoing something because the old one was not good enough, and making it unusable yet again. The flashy GUI is nice, but features are welcome too.
Anyhow, hardcoding a maximum of 8 devices into the driver, only because current BIOSes don't support more, is a mistake in my view. It costs nothing not to hardcode such a thing, and otherwise those who want to build really dense GPU systems have to hack both the BIOS and the Linux drivers. We don't want that...
I remember back when 4 GPUs per MB was the max. It's nice that it's up to 8 now. 8 GPUs is anyway the max you can fit in most commodity MBs.
Going beyond that would require some special hardware, such as PCIe over cable,
or google the FASTRA II project for their custom solution.
Of course, all those GPUs have to share the CPU. For my applications there's no point in spreading the CPU so thin. I therefore prefer to go beyond 8 with InfiniBand and MPI ... which is of course fully supported.
In fact, for my applications, I think the mobile Llano Fusion APUs would be the best balance of CPU, GPU, and power consumption, and the zero-copy is a big plus. If only there were a nice way to get the density up ... like a micro-ITX board with a Llano A8-3000MX series APU and embedded InfiniBand (no PCIe card needed) ...
or some backplane system or blade config.
The motherboard you point to won't support 8 dual-GPU cards without hacking.
As far as I can see, dual-GPU cards usually take up two slots, so you won't be able to connect them all without the use of extenders!
I should add that this MB won't be able to supply all the power required through the PCI-E ports, so yet another extender hack is required. In the end, the system would be so complex and risky that it's better to pay an extra $500 to purchase another system and connect the two through fiber optics or InfiniBand.