But I get different values compared to the CPU implementation.
If I understand right, what you want to do is simply convert coordinates from one space (destination image) to another space (source image).
int DestX = 10;
int DestY = 10;
You should normalise the coordinates so you can turn them into source-image coords:
int DestWidth = 200;
int DestHeight = 200;
float NormX = (float)DestX / DestWidth; // cast to float, otherwise the
float NormY = (float)DestY / DestHeight; // integer division truncates to 0
Then convert to source-image space:
int SrcWidth = 100;
int SrcHeight = 100;
NormX *= SrcWidth;
NormY *= SrcHeight;
Now your destination coordinates are in source-image space.
float4 SrcColour = ReadImage( SrcImage, NormX, NormY );
WriteImage( DestImage, DestX, DestY, SrcColour );
This would be easy peasy to turn into an OpenCL kernel.
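A rough sketch of that kernel (untested; the kernel and argument names are mine). Note that with a normalized-coordinate, linear-filtering sampler, the multiply by the source size happens for free -- you pass the [0,1] coords straight to read_imagef:

```c
// CLK_FILTER_LINEAR gives you the bilinear filtering in hardware
__constant sampler_t smp = CLK_NORMALIZED_COORDS_TRUE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_LINEAR;

__kernel void Rescale(read_only image2d_t SrcImage,
                      write_only image2d_t DestImage)
{
    int DestX = get_global_id(0);
    int DestY = get_global_id(1);
    int2 DestDim = get_image_dim(DestImage);

    // Normalised coords; the +0.5f samples at the pixel centre
    float2 Norm = (float2)((DestX + 0.5f) / DestDim.x,
                           (DestY + 0.5f) / DestDim.y);

    float4 SrcColour = read_imagef(SrcImage, smp, Norm);
    write_imagef(DestImage, (int2)(DestX, DestY), SrcColour);
}
```

Launch it with a 2D global work size equal to the destination dimensions, one work-item per destination pixel.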
As for the bilinear filtering, the ReadImage() would take care of that when sampling at sub-pixel coordinates (e.g. 5.3f, 5.3f).
Yeah got it, quite a dumb question in fact.
Sorry for the bother and thanks!