I ran into a weird problem: I want to compress my normal vectors, so I converted them from floats to signed bytes.
It turned out that the frame rate drops insanely (7 FPS instead of 42) when I use the char data type instead of float.
I also tested the same thing with the ANGLE project. Through Direct3D the same thing works correctly: I got the same FPS with reduced memory usage.
I'm doing it with VBOs. What am I doing wrong? o.O
Is the GL_BYTE, tupleSize: 3, Normalized=true combination bad in OpenGL? If so, is there a workaround?
Solved!
This 32-bit alignment was it, thanks!
I've tried unaligned 3-component byte, and 3-component short too: they're bad as well.
Only 4-byte-aligned formats (padded byte and GL_INT_10_10_10_2) worked perfectly.
So from now on I'll interleave the normals with the rest of the vertex data and compress them to shorts. Hopefully that will work fine on any kind of hardware.