Hi, I'm new to GPGPU programming, so it might be a little too difficult for me to get started quickly. I would like to measure how long it takes a single 5870 to perform, say, 1,000,000,000,000 double-precision (or other floating-point format) operations. Is there any short/simple code that would be a good reference to start from?
If you need to benchmark a GPU (not necessarily via ATI's OpenCL), run the CAL++ peekflop example (http://sourceforge.net/projects/calpp/). On the other hand, converting peekflop to OpenCL is almost straightforward.
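For reference, here is a rough sketch of what such an OpenCL micro-benchmark might look like (this is an illustration, not the actual peekflop code; the kernel name, loop count, and work size are arbitrary choices). Each work-item runs a fixed number of `mad()` calls, and the host computes GFLOPS from the event profiling timestamps:

```c
// flops.c -- a rough OpenCL FLOPS micro-benchmark (sketch, not peekflop).
// Build (Linux): gcc flops.c -lOpenCL -o flops
#include <CL/cl.h>
#include <stdio.h>

// Each work-item executes 65536 loop iterations; each iteration
// performs 8 mad() calls = 16 floating-point operations.
static const char *kSrc =
    "__kernel void flops(__global float *out) {          \n"
    "  float a = 1.1f, b = 2.2f, c = 3.3f, d = 4.4f;     \n"
    "  for (int i = 0; i < 65536; ++i) {                 \n"
    "    a = mad(b, c, a);  b = mad(c, d, b);            \n"
    "    c = mad(d, a, c);  d = mad(a, b, d);            \n"
    "    a = mad(b, c, a);  b = mad(c, d, b);            \n"
    "    c = mad(d, a, c);  d = mad(a, b, d);            \n"
    "  }                                                 \n"
    "  out[get_global_id(0)] = a + b + c + d;            \n" /* keep result live */
    "}                                                   \n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q =
        clCreateCommandQueue(ctx, dev, CL_QUEUE_PROFILING_ENABLE, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "flops", NULL);

    size_t global = 1 << 20;                       /* 1M work-items */
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                global * sizeof(float), NULL, NULL);
    clSetKernelArg(k, 0, sizeof(out), &out);

    cl_event ev;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, &ev);
    clWaitForEvents(1, &ev);

    cl_ulong t0, t1;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(t0), &t0, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(t1), &t1, NULL);

    double secs = (t1 - t0) * 1e-9;                /* profiling times are ns */
    double ops  = (double)global * 65536 * 16;     /* 16 flops per iteration */
    printf("%.2f GFLOPS (%.3f s)\n", ops / secs * 1e-9, secs);
    return 0;
}
```

Note this measures single-precision throughput; for double precision you would change the kernel types to `double`, add `#pragma OPENCL EXTENSION cl_khr_fp64 : enable` to the kernel source, and check that the device reports that extension. Also be aware the dependency chains between `a`, `b`, `c`, `d` limit instruction-level parallelism, so tuning the mix of independent operations is part of getting close to peak.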