I understand how it could not be financially viable, but to say that it doesn't hold the "riches" of computation is a slight overstatement.
It's not the end-all, be-all for every app, but for some apps it makes perfect sense and is very cost-effective.
I honestly believe that the biggest hurdles to GPGPU at the moment are not the theoretical limits of the computation but human ones: our inability to properly harness the computational power of the GPU.
The average coder is much more interested in (read: lazy enough to prefer) buying large, expensive CPU machines (clusters) and doing some simple MPI or OpenMP, where the code doesn't have to change that much and nobody has to think around the hardware.
I believe (hope) that there will be significant interest from the business (analyst) sector, hopefully as much as there has been from the academic sector.
GPGPU needs to catch up on the software side. I've just seen too many applications go from CPU code to GPU code and gain 20-30x or more in speed on the SAME MACHINE, just by utilizing the GPU that's already sitting there.
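To make that concrete, here is a toy CUDA sketch (not any of the applications above, and far too small to show those speedups itself) of what such a port typically looks like: the existing CPU loop becomes a kernel, plus the explicit device allocation and copies that are exactly the "thinking around the hardware" people would rather avoid.

    #include <cstdio>
    #include <cuda_runtime.h>

    // The CPU loop most existing codes already contain.
    void scale_cpu(const float* in, float* out, int n) {
        for (int i = 0; i < n; ++i) out[i] = 2.0f * in[i];
    }

    // The GPU port: the loop body becomes a kernel, one thread per element.
    __global__ void scale_gpu(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 2.0f * in[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host data.
        float* h_in  = new float[n];
        float* h_out = new float[n];
        for (int i = 0; i < n; ++i) h_in[i] = float(i);

        // "Thinking around the hardware": explicit device memory and copies.
        float *d_in, *d_out;
        cudaMalloc(&d_in, bytes);
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        scale_gpu<<<blocks, threads>>>(d_in, d_out, n);
        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

        printf("h_out[3] = %f\n", h_out[3]);  // expect 6.0

        cudaFree(d_in); cudaFree(d_out);
        delete[] h_in; delete[] h_out;
        return 0;
    }

Even in this trivial case you can see why people balk: the OpenMP version of the same loop is one pragma, while the GPU version means kernels, launch geometry, and data movement. The payoff only shows up when the problem is big and parallel enough to amortize that effort.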
That said, HPC technologies do sometimes get overhyped.
While I think that Nvidia's CUDA is a good tool, there is academic opposition to it in places because it is not hardware-independent. AMD's Firestream is still too much in its infancy to be of any real use, and the academics who choose to go the OpenGL(DX)/SL shader route are fine, but that approach is not practical for the business world, so I doubt it will ever catch on.
This is all, of course, JMO.