My software can run on both the CPU and the GPU, and on the GPU I get some strange results that seem related to floating-point accuracy.
Note that I have no problem on the CPU, because intermediate values stay in the FPU's extended-precision registers (80-bit on x87), so rounding is automatically done with more precision there.
My code seems correct, and I can sometimes fix a problem by using 'double' instead of 'float'... but some problems remain on the GPU that I can't fix that way.
Has anyone run into problems of this kind? And did you find any solutions?