Hi,
for an algorithm that works on an image pixel by pixel, is there an actual advantage to using, e.g., a two-dimensional NDRange for the global and local work sizes rather than a one-dimensional one?
For example, the global work size could be a two-dimensional NDRange of (image.width, image.height), or a one-dimensional NDRange of image.width*image.height.
I pack my image data into a one-dimensional array of size image.width*image.height, so I would have to do an additional computation on the GPU to get my array index from the two-dimensional global ID.
Did I miss any possible advantage that a two-dimensional NDRange would bring in my particular case?
Thank you in advance,
Greetings,
steve