Now that the gl_*Matrix built-ins are deprecated, we need to supply the shaders with our own matrices via uniforms. So far, so good.

Imagine I have the following part of a vertex shader:

...

uniform mat4 my_projection_mat;

uniform mat4 my_modelview_mat;

attribute vec3 my_vertex_pos;

...

gl_Position = (my_projection_mat * my_modelview_mat) * vec4(my_vertex_pos, 1.0);

...

My question is: will the AMD driver perform the "my_projection_mat * my_modelview_mat" multiplication for every vertex? Or will it understand that this product only needs to be computed once per draw call?
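In general you shouldn't rely on the compiler hoisting a uniform * uniform product out of the per-vertex path; the usual fix is to multiply the matrices once on the CPU each draw call and upload the combined result as a single uniform. A minimal numpy sketch (matrix values are arbitrary placeholders, not from the original post) showing that the precomputed product gives the same position for every vertex:

```python
import numpy as np

# Hypothetical per-frame matrices (values are placeholders for illustration).
proj = np.diag([1.0, 1.0, -1.0, 1.0])
modelview = np.eye(4)
modelview[:3, 3] = [2.0, 0.0, -5.0]  # a simple translation

# CPU side: combine the matrices once per draw call...
mvp = proj @ modelview

# ...instead of letting the shader redo (proj * modelview) per vertex.
vertices = np.array([[0.0, 0.0, 0.0, 1.0],
                     [1.0, 2.0, 3.0, 1.0]])

for v in vertices:
    per_vertex = (proj @ modelview) @ v   # what the naive shader computes
    precomputed = mvp @ v                 # what the combined uniform yields
    assert np.allclose(per_vertex, precomputed)
```

On the GL side you would then upload mvp with a single glUniformMatrix4fv call and write gl_Position = mvp * vec4(my_vertex_pos, 1.0); in the shader.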

Write a simple test with one matrix, then with two matrices, and compare the results.

EDIT: I tried writing a simple vertex shader in GPU ShaderAnalyzer:

attribute vec4 pos;

uniform mat4 proj;
uniform mat4 model;
uniform mat4 view;

void main() {
    gl_Position = model * pos;
}

It returns code with four instruction slots.

If I add one more matrix, e.g. model * proj * pos, it returns much longer code: 33 instruction slots.

But if I change it to model * (proj * pos), it returns only 8 instruction slots, and each added matrix adds four more instruction slots, but only if I enclose it in parentheses:

view * (model * (proj * pos))