How do integer shifts work? The "Intermediate Language (IL) v2.0" spec says that ISHL, ISHR, and USHR "shift each element in src0 by the unsigned integer bit count provided by the lowest five bits of the corresponding elements in src1." According to GPU ShaderAnalyzer, ISHL translates into the R600 LSHL instruction. The "R600-Family Instruction Set Architecture" claims that "LSHL_INT - Scalar logical shift left. Zero is shifted into the vacated locations. Src1 is interpreted as an unsigned integer. If Src1>31, then the result is 0x0." Please explain: are the lowest 5 bits of Src1 used as the shift count, or the whole 32-bit register?
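To make the difference concrete, here is a small C sketch of the two readings (my own illustration, not taken from either document): il_shl masks the count to its low five bits as the IL spec describes, lshl_doc returns zero for counts above 31 as the LSHL description says. The two only diverge when the count exceeds 31:

#include <stdint.h>
#include <stdio.h>

/* Reading 1: only the low 5 bits of the count are used (IL spec wording). */
static uint32_t il_shl(uint32_t src0, uint32_t src1)
{
    return src0 << (src1 & 31u);
}

/* Reading 2: the whole 32-bit count is checked; a count > 31 gives 0 (LSHL wording). */
static uint32_t lshl_doc(uint32_t src0, uint32_t src1)
{
    return (src1 > 31u) ? 0u : (src0 << src1);
}

int main(void)
{
    /* Counts up to 31 agree; a count of 33 shows the divergence. */
    printf("%08x %08x\n", il_shl(1u, 33u), lshl_doc(1u, 33u)); /* prints 00000002 00000000 */
    return 0;
}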
Thank you, Micah, for your answer. Could you comment on this situation, please:
I need to perform an integer a = b - c. I found no ISUB instruction, so I use the negate modifier:
iadd r0.x,vObjIndex0.x,cb0[0]_neg(x)
The GPU ShaderAnalyzer shows that this will be compiled to:
0 y: SUB_INT ____, 0.0f, KC0[0].x
1 x: ADD_INT R1.x, R1.x, PV0.y
I see a subtraction from zero, followed by an addition! Why not just SUB_INT one operand from the other? Is there another way to perform a subtraction in a single ALU op?
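For what it is worth, the emitted sequence does compute the correct value; here is a small C sketch (my own illustration, assuming the usual 32-bit wraparound arithmetic) of what those two instructions do:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t b = 7u, c = 123456u;
    uint32_t pv = 0u - c;   /* SUB_INT ____, 0, c   (subtract from zero)    */
    uint32_t r  = b + pv;   /* ADD_INT r, b, PV0.y  (add the negated value) */
    /* Both lines print -123449, so the result matches a direct b - c. */
    printf("%d\n", (int32_t)r);
    printf("%d\n", (int32_t)(b - c));
    return 0;
}

So my question is only about the instruction count, not about correctness.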