I have a Radeon HD 5830 which, according to the specifications here
http://www.amd.com/us/products/desktop/graphics/ati-radeon-hd-5000/hd-5830/Pages/hd-5830-overview.aspx#2
can perform double-precision operations. So why, when I run CLInfo from the SDK samples, do I get 0 as the preferred vector width for double? And why is the max clock frequency reported as 0?
Could it depend on the runtime being OpenCL 1.1? (I use the ATI Stream SDK 2.2 with Ubuntu 10.04 and Catalyst 10.10.) Or could it depend on ATI not supporting the optional standard double-precision extension (cl_khr_fp64) but a proprietary one (cl_amd_fp64) instead?
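For reference, this is roughly how I understand the extension situation: on the Stream SDK the double extension reported by the device may be the vendor one rather than the Khronos one, and a kernel has to enable whichever the device actually advertises. A sketch of what I mean (the pragma names are the real extension identifiers; whether the 5830 exposes either is exactly what I am unsure about):

    // Kernel-side: enable double support before using the double type.
    // Use cl_khr_fp64 if the device advertises it, cl_amd_fp64 otherwise.
    #ifdef cl_khr_fp64
    #pragma OPENCL EXTENSION cl_khr_fp64 : enable
    #elif defined(cl_amd_fp64)
    #pragma OPENCL EXTENSION cl_amd_fp64 : enable
    #endif

    __kernel void scale(__global double *data, double factor)
    {
        size_t i = get_global_id(0);
        data[i] *= factor;   // double arithmetic: only valid if an fp64 extension is enabled
    }

If neither pragma is enabled and the kernel still compiles and runs, that would itself be suspicious, which is part of why I don't trust the CLInfo output.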
Device Type: CL_DEVICE_TYPE_GPU
Device ID: 4098
Max compute units: 14
Max work items dimensions: 3
Max work items[0]: 256
Max work items[1]: 256
Max work items[2]: 256
Max work group size: 256
Preferred vector width char: 16
Preferred vector width short: 8
Preferred vector width int: 4
Preferred vector width long: 2
Preferred vector width float: 4
Preferred vector width double: 0
Max clock frequency: 0 MHz
I wrote an application using doubles and I get neither an error nor a warning about rounding to single precision, and the application seems to run correctly, so this is a little weird. But since even the clock frequency is wrong, I suppose the double vector width indication can't be totally reliable either.
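To double-check, I could query the device directly instead of trusting CLInfo. A minimal host-side sketch (assumes a device handle already obtained via clGetDeviceIDs; the clGetDeviceInfo calls and the CL_DEVICE_* parameter names are the standard OpenCL 1.1 API):

    /* Host-side: ask the device what it actually reports. */
    #include <stdio.h>
    #include <CL/cl.h>

    void print_double_support(cl_device_id dev)
    {
        cl_uint width = 0;
        clGetDeviceInfo(dev, CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE,
                        sizeof(width), &width, NULL);
        printf("Preferred vector width double: %u\n", width);

        char ext[2048] = {0};
        clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS,
                        sizeof(ext), ext, NULL);
        /* Look for either the Khronos or the AMD double extension. */
        printf("cl_khr_fp64: %s\n", strstr(ext, "cl_khr_fp64") ? "yes" : "no");
        printf("cl_amd_fp64: %s\n", strstr(ext, "cl_amd_fp64") ? "yes" : "no");
    }

(That needs <string.h> for strstr as well.) If the extensions string lists cl_amd_fp64 but the preferred width is still 0, that would support the "proprietary extension" theory.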
In this post
http://forums.amd.com/devforum/messageview.cfm?catid=390&threadid=142709&enterthread=y
a double-supporting Nvidia GPU returns 1 for the preferred size (sorry for the cross-reference, but it was already on this forum).