

Question asked by savage309 on Apr 25, 2015
Latest reply on Apr 30, 2015 by nou


I know that this topic has been raised a lot, but I want to raise it one more time.

I am working on some GPGPU apps and need to make them run fast on all kinds of hardware, so for quite some time now I have been doing some in-depth research into what the hardware does exactly.

I have more experience with CUDA and nVidia GPUs (since nVidia's OpenCL implementation is really bad), but I am more and more interested in the AMD GPUs, since I can see so much great potential in them that is not being used properly today on our side.

I am trying to find out whether there is any difference between the SIMD model in GCN 1.2 and the SIMT model (in, let's say, Maxwell), or whether SIMT is just a marketing buzzword used by nVidia (honestly, I don't see much of a difference; if there is one, it has to be in the way branching is handled). If there is a difference, how does all this compare to the Intel GPUs?


Furthermore, we lack good video lectures on GCN (or at least I can't find any; on the other hand, we have the Stanford nVidia lectures, which are quite good). The GCN white paper could also use a bit of refining (I am not a hardware expert, but I have read quite a few white papers and have some view on hardware, yet at some point it lost me).


Thanks!