thokra
Adept II

No output with core profile

Hi people!
I can't figure out what's wrong with my program. More specifically, I'm not sure if I've stumbled over a bug in the driver or if I'm using incorrect code.
My application log reports:

[10:25:29]: Detected Linux x86_64

[10:25:29]: Detected Kernel 2.6.38-8-generic

[10:25:29]: Detected OpenGL vendor ATI Technologies Inc.

[10:25:29]: Detected OpenGL version 3.2.10665 Core Profile Context

[10:25:29]: Detected OpenGL renderer ATI Radeon HD 4800 Series

[10:25:29]: Detected GLSL version 3.30
My project is completely shader-centric; at present I'm using only the following shaders, non-indexed geometry, and 3.2 compat/core profiles:
#version 150
// predefined inputs from application
uniform mat4 p_ModelViewMatrix;
uniform mat4 p_ProjectionMatrix;
in vec3 p_Vertex;

// outs
out vec3 p_Vertex_Out;

void main()
{
  p_Vertex_Out = p_Vertex;
  gl_Position = p_ProjectionMatrix * p_ModelViewMatrix * vec4(p_Vertex, 1.0);
}
NOTE: The predefined uniforms are correctly updated, as verified with gDEBugger. The attribute p_Vertex is bound to index 0. p_Vertex_Out simply passes the world-space position of the vertex on to the geometry stage.
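For completeness, the binding to index 0 happens before linking the program, roughly like this ("program" stands in for my actual program handle):

glBindAttribLocation(program, 0, "p_Vertex");
glLinkProgram(program);

The geometry shader: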

#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 p_Vertex_Out[];
out vec3 color;

void main()
{
  for(int i = 0; i < gl_in.length(); i++)
  {
    // assign each vertex its world-space position as color and interpolate
    color = p_Vertex_Out[i];            // the forum originally stripped these [i] subscripts
    gl_Position = gl_in[i].gl_Position;
    EmitVertex();
  }

  EndPrimitive();
}

The fragment shader:
#version 150
in vec3 color;

layout(location = 0) out vec4 p_FragColor;

void main()
{
  p_FragColor = vec4(color, 1.0);
}
The program compiles and is successfully validated, and gDEBugger reports that no deprecated functionality is used. The compat profile gives me the expected output, but with the core profile no fragments seem to be generated. The application performs flawlessly on the latest NVIDIA Linux and Windows drivers; on the current Linux Catalyst (11.4 pre in 11.04 Natty) and Windows Catalyst (11.3) it does not. If this part of AMD's GL implementation is platform-independent, then I suspect a driver bug.
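For reference, the core context is requested through GLX_ARB_create_context on the Linux side, roughly like this (display and fbconfig are stand-ins for my actual setup, error handling omitted):

int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};
GLXContext context = glXCreateContextAttribsARB(display, fbconfig, 0, True, attribs);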
Anyone got something on this?
Thomas

frali
Staff

Maybe you use some deprecated functions in your application but not in the shaders. If it can't be resolved, could you please send your program to me at frank.li@amd.com? Thanks.
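For instance, fixed-function calls like the following are accepted under a compat profile but were removed from core profiles, where they typically raise GL_INVALID_OPERATION (an illustrative sketch, not taken from your code):

glMatrixMode(GL_MODELVIEW); // fixed-function matrix stack, removed in core
glLoadIdentity();
glBegin(GL_TRIANGLES);      // immediate mode, removed in core
// ...
glEnd();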

thokra
Adept II

Hey Frank, thanks for your reply!

After throwing out GLEW, which still generates an INVALID_ENUM when calling glewInit() under the core profile, I think I've narrowed the problem down to the following:

void Geometry::render() const
{
  const GLvoid* normal_offset_ptr = reinterpret_cast<const GLvoid*>(normal_offset_);
  glBindBuffer(GL_ARRAY_BUFFER, vbo_handle_);

  // index 0 carries the vertices, index 1 the normals
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, normal_offset_ptr);
  glDrawArrays(GL_TRIANGLES, 0, vertex_count_);
}

Both invocations of glVertexAttribPointer raise an INVALID_OPERATION, although I can find no basis for that in the spec. Both attribute arrays are of course enabled, and vbo_handle_ is properly generated and NOT zero. Am I missing something?
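As an aside, the stray INVALID_ENUM from glewInit() appears to be the well-known consequence of GLEW calling glGetString(GL_EXTENSIONS), a query that core profiles removed. The workaround I've seen suggested (untested on my side; assumes a reasonably recent GLEW) is roughly:

glewExperimental = GL_TRUE; // make GLEW probe entry points even though the extension-string query fails
GLenum glew_status = glewInit();
glGetError();               // swallow the stray GL_INVALID_ENUM that glewInit() leaves behind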

Thomas

frali
Staff

Do you enable the vertex attribute arrays via glEnableVertexAttribArray(0)/(1) outside of render()?

frali
Staff

Another problem is in the geometry shader; maybe it's a typo:

color = p_Vertex_Out; => color = p_Vertex_Out[i];

thokra
Adept II

The second "error" is the forum throwing out the subscripts. So no error there.

As for question one: I tried both, with no effect. First I enabled the arrays immediately before the draw call and disabled them again once the call was done. At present, arrays 0 and 1 are enabled by default and permanently bound to the predefined vertex shader inputs "p_Vertex" and "p_Normal". If no normals are supplied with the geometry, they are generated on the fly. My test models all come with proper normals.
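Concretely, the two approaches look like this (a sketch; drawObject() stands in for my actual draw call):

// (a) enable the arrays around each draw call
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
drawObject();
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);

// (b) enable both arrays once at startup and simply leave them enabled
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);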

Is it good practice to disable the arrays when drawing one object is done, or is my second approach preferable?

Anyway, both approaches work fine in compat but not in core.

Thomas


frali
Staff

It's hard to detect what's wrong from just a piece of code. Could you please send your program to me at frank.li@amd.com? Thanks.


Hello everyone.

Do you need a Vertex Array Object with core contexts? I was having the same kind of issue, so I looked at the GL errors with GL_AMD_debug_output and saw:

glVertexAttribPointer in a Core context called without a bound Vertex Array Object [which is now required for Core Contexts]. (GL_INVALID_OPERATION)

glDrawArrays in a Core context called without a bound Vertex Array Object [which is now required for Core Contexts]. (GL_INVALID_OPERATION)
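For reference, I register the callback roughly like this (the function name is mine; it assumes the GL_AMD_debug_output entry point has been loaded):

// prints every message the driver reports through GL_AMD_debug_output
static void APIENTRY onDebugMessageAMD(GLuint id, GLenum category, GLenum severity,
                                       GLsizei length, const GLchar* message, GLvoid* userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}

// during initialization:
glDebugMessageCallbackAMD(onDebugMessageAMD, 0);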

 

In my simple example I just added:

    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

and it started to work.
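Applied to the render() method from earlier in the thread, the whole core-profile setup would look roughly like this (vbo_handle_, normal_offset_ and vertex_count_ are the names from the posts above; everything else is illustrative):

// once, at initialization: the VAO records the attribute state
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo_handle_);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0,
                      reinterpret_cast<const GLvoid*>(normal_offset_));

// per frame: bind the VAO and draw
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertex_count_);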
