Is this using vector data types or scalar data types?
Implicit conversions are supported on scalar data types, so something like min(<int>, <uint>) should be allowed. However, according to section 6.2.1, "implicit conversions between built-in vector data types are disallowed", so something like min(<int4>, <uint4>) should throw an error.
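To make the distinction concrete, here is a minimal OpenCL kernel sketch (kernel and variable names are made up for illustration, and it needs a device compiler, so it is not standalone-runnable):

```c
__kernel void demo(__global uint *out) {
    int  si = -1;
    uint ui = 1u;
    // Scalar operands: by the reading above, implicit conversion
    // applies, so this should compile.
    out[0] = min(si, ui);

    int4  vi = (int4)(-1);
    uint4 vu = (uint4)(1u);
    // Vector operands: implicit conversions between built-in vector
    // types are disallowed per section 6.2.1, so this should be
    // rejected by the compiler:
    // out[1] = min(vi, vu).x;
}
```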
Basically, it happens when I do something like the attached.
#define foo 5   // Causes an error on the Intel and NVIDIA compilers, but not AMD
#define foo 5u  // Works
unsigned int bar = 12;
unsigned int baz = min(foo, bar);
I think this may be an edge case, because I don't know if C defines how literals should be converted.
unsigned int foo = 12;
In this case, the literal 12 is a signed int so an implicit typecast should take place. However, the compiler could also notice there are no side effects in this assignment, so it could just assume you meant 12u.
Hmm, I understand now... I wasn't thinking it through.
I guess the problem arises from the fact that min is a common function that can take many different datatypes...
...the compiler does not actually know what to implicitly convert to (convert both to uint or both to int). I guess it doesn't matter for a value of 5, but for a value greater than 0x80000000 it would make a difference to the output of the function.
I guess the AMD compiler must automatically assume that when a literal value (5 in this case) is passed with no explicit data type, its type matches that of the other argument. Because the stored bit pattern is identical for both the unsigned and signed types, it works. It would be interesting to see whether changing the value to something greater than 0x80000000 would throw an error.
I'm not sure which assumption is incorrect :-/
This will be fixed in an upcoming release.