
    glVertexAttribPointer() normalized byte[3] is much slower than float[3]

    realhet

      Hi!


      I ran into a weird problem: I want to compress my normal vectors, so I converted them from floats to signed bytes.
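      Roughly, the packing looks like this (a minimal sketch; the struct and function names are mine, and the round-to-nearest scaling by 127 is just the usual way to map [-1, 1] into a signed byte):

          /* Pack one float normal into 3 signed bytes; GL_BYTE with
             normalized = GL_TRUE maps them back to [-1, 1] in the shader. */
          #include <math.h>

          typedef struct { signed char x, y, z; } PackedNormal;

          static PackedNormal packNormal(float nx, float ny, float nz)
          {
              PackedNormal p;
              p.x = (signed char)roundf(nx * 127.0f);
              p.y = (signed char)roundf(ny * 127.0f);
              p.z = (signed char)roundf(nz * 127.0f);
              return p;
          }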

      It turned out that the frame rate drops insanely (7 FPS instead of 42) when I use a char[3] instead of a float[3] data type.

      I also tested the same thing with the ANGLE project. Under Direct3D it works correctly: I got the same FPS with the reduced memory usage.


      I do it with VBOs. What am I doing wrong? o.O

      Is the GL_BYTE, tuple size 3, normalized = GL_TRUE combination bad in OpenGL? If so, is there a solution to this? My setup is sketched below.
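      For reference, this is roughly how I set up the attribute in both cases (a minimal sketch: the buffer and attribute names are illustrative, and I assume tightly packed normals, so the strides are just the element sizes):

          /* Fast path: normals as 3 floats per vertex (12 bytes). */
          glBindBuffer(GL_ARRAY_BUFFER, vboFloatNormals);
          glVertexAttribPointer(normalAttrib, 3, GL_FLOAT, GL_FALSE,
                                3 * sizeof(GLfloat), (const void *)0);
          glEnableVertexAttribArray(normalAttrib);

          /* Slow path: the same normals as 3 signed bytes per vertex,
             normalized by the GL so the shader still sees [-1, 1].
             Note the 3-byte stride is not 4-byte aligned. */
          glBindBuffer(GL_ARRAY_BUFFER, vboByteNormals);
          glVertexAttribPointer(normalAttrib, 3, GL_BYTE, GL_TRUE,
                                3 * sizeof(GLbyte), (const void *)0);
          glEnableVertexAttribArray(normalAttrib);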


      Thank you!