The benchmark fails to start because the translation of gl_InstanceID is incorrect. The TGSI instruction
UMAD TEMP[0], SV[0].xxxx, TEMP[0].xxxx, TEMP[1]
is translated to:
temp0[0] = vec4(uintBitsToFloat((gl_InstanceID * floatBitsToUint(temp0[0].xxxx) + floatBitsToUint(temp0[1]))));
Which results in the following error:
shader failed to compile
0:23(34): error: could not implicitly convert operands to arithmetic operator
0:23(34): error: operands to arithmetic operators must be numeric
0:23(17): error: no matching function for call to `uintBitsToFloat(error)'; candidates are:
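The failure comes from the operand types: UMAD is TGSI's unsigned multiply-add, so the translator bitcasts its sources to uint with floatBitsToUint(), but gl_InstanceID is an int and GLSL has no implicit int-to-uint conversion, leaving the multiply ill-typed and uintBitsToFloat() with no matching overload. As a hypothetical illustration (not from the benchmark; u0/u1 stand in for TEMP[0]/TEMP[1]), a minimal stand-alone shader hits the same error:

#version 330
uniform vec4 u0, u1;
void main()
{
    /* gl_InstanceID is an int and floatBitsToUint() yields a uvec4;
     * GLSL 3.30 has no implicit int -> uint conversion, so this
     * multiply fails to type-check just like the translated shader. */
    gl_Position = vec4(uintBitsToFloat(gl_InstanceID * floatBitsToUint(u0)
                                       + floatBitsToUint(u1)));
}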
It seems we can use the same workaround as for gl_VertexID. I didn't
observe any regression running various gl_InstanceID tests from
piglit:
temp0[0] = vec4(uintBitsToFloat((floatBitsToUint(vec4(intBitsToFloat(gl_InstanceID))) * floatBitsToUint(temp0[0].xxxx) + floatBitsToUint(temp0[1]))));
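The extra vec4(intBitsToFloat(...)) wrapper bitcasts the int system value to float and splats it to a vec4, so the generic floatBitsToUint() bitcast then applies to it like any other source. In the same hypothetical stand-alone shader as above, the fixed expression type-checks:

#version 330
uniform vec4 u0, u1;
void main()
{
    /* Bitcast gl_InstanceID (int) to float, splat to a vec4, then
     * bitcast to uint like the other operands; all three UMAD sources
     * are now uvec4, so the multiply-add compiles. */
    uvec4 inst = floatBitsToUint(vec4(intBitsToFloat(gl_InstanceID)));
    gl_Position = vec4(uintBitsToFloat(inst * floatBitsToUint(u0)
                                       + floatBitsToUint(u1)));
}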
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>