
Archives Discussions

james1
Journeyman III

Problem when rendering with a single VBO containing only GL_BYTE data

If I try to render using a single VBO that contains only GL_BYTE or GL_UNSIGNED_BYTE data, the input to my vertex shader seems to be constantly 0 regardless of the data I have put into the VBO (though on an Nvidia card the data shows up as expected).

If I create a second, unused, dummy VBO, attach it to one of the VAO's vertex attribute descriptions, and fill it with GL_FLOAT data, then the input to the shader appears as expected.

For example, if I have the shader:

#version 420 core

// note: GLSL's unsigned scalar type is uint, not "unsigned int"
layout(location = 5) in uint inByte;

out vec3 colour;

void main()
{
    if (inByte == 0)
    {
        colour = vec3(1, 0, 0);
        gl_Position = vec4(-0.5, -0.5, 0, 1.0);
    }
    else if (inByte == 1)
    {
        colour = vec3(0, 1, 0);
        gl_Position = vec4(0.5, -0.5, 0, 1.0);
    }
    else
    {
        colour = vec3(0, 0, 1);
        gl_Position = vec4(0.0, 0.5, 0, 1.0);
    }
}

and I try glDrawArrays(GL_TRIANGLES, 0, 3); using the VAO:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, ab_byte);
glVertexAttribIPointer(5, 1, GL_UNSIGNED_BYTE, sizeof(unsigned char), BUFFER_OFFSET(0));
glEnableVertexAttribArray(5);

nothing shows up (ab_byte is filled with unsigned char [] = {0, 1, 2}).

However, changing the VAO to this:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, ab_byte);
glVertexAttribIPointer(5, 1, GL_UNSIGNED_BYTE, sizeof(unsigned char), BUFFER_OFFSET(0));
// dummy buffer
glBindBuffer(GL_ARRAY_BUFFER, ab);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, sizeof(float), BUFFER_OFFSET(0));
glEnableVertexAttribArray(5);
glEnableVertexAttribArray(0);

(where ab is filled with float [] = {0,0,0}, and the shader and draw call are unchanged)

allows me to draw the triangle as originally intended even though the data in ab is never used.

Is this an issue with ATI cards, or have I misunderstood the spec?

Thanks for any help.

(I am using an A10-4655M APU with Radeon 7620G if that helps)

gsellers
Staff

Hi,

This could be related to a known issue with not having a vertex attribute bound to location 0. I noticed that your shader with only the uint attribute has it at location 5. Try moving it to location 0 and see if that works. If it does, that's a known issue with a fix that will be released in an upcoming driver. Otherwise, let us know and we'll take a deeper look.

Cheers,

Graham


Hello Graham,

It appears you are right; using location 0 fixed it. Is there a release date for the fixed drivers?

Thanks for the help,

James


Is it better to create a new thread for this problem? I can upload the code if that would help.

james1
Journeyman III

I just noticed that if I create my context using:

WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,

it works, but with

WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB

it doesn't work (both profiles work on NV). Could this be a bug?

edit: to clarify, using the fix where I used location 0 instead of 5, things work as expected with the compatibility profile; but with the core profile no rendering occurs for pure GL_BYTE VBO data, while the rest of the scene draws as expected.

james1
Journeyman III

I tested a little further.

Using location 0, a VBO filled with const char [] = {0, 1, 2}, and glVertexAttribIPointer(0, 1, GL_BYTE, sizeof(char), BUFFER_OFFSET(0));, nothing renders in the core profile.

But changing the VBO to be filled with const int [] = {0, 1, 2} and calling glVertexAttribIPointer(0, 1, GL_INT, sizeof(int), BUFFER_OFFSET(0));, it renders as expected in the core profile.

Both versions render in the compatibility profile.


Thanks for the extra info. This seems like it might be an unfortunate interaction between two or more problems. We'll try to reproduce this locally and get a fix produced.

Cheers,

Graham


I can confirm that the same bug happened to me in a slightly different way. I wasn't trying to fill an attribute array with GL_BYTE data; I just tried to use it for an ELEMENT_ARRAY. Topic here:

https://www.opengl.org/discussion_boards/showthread.php/181049-glDrawElements-access-violation-on-AM...

It also describes how gDebugger doesn't work properly with AMD cards: it never reports the actual error; it always stops with an access violation on any OpenGL error.

And two other possible bugs:

Blitting doesn't work with GL_DEPTH_COMPONENT32/24/16; you are forced to use GL_DEPTH24_STENCIL8.

And the weirdest one: SwapBuffers causes an access violation, according to gDebugger. I have no idea why, because it doesn't report an actual error. Nothing like that happens with gDebugger on an Nvidia card.
