AMD Radeon HD 5700 Series OpenGL Driver Issues
Hey guys, I'm working on some OpenGL code and I noticed some peculiarities on one of my machines that don't happen on any of the others. After messing around with my code I managed to get it working, but I have no idea why the changes I made fixed it, or how to detect this case other than hardcoding the renderer's name, so I decided to ask here.
The GPU in this machine is the ATI Radeon HD 5770 (Specs | TechPowerUp GPU Database).
GL_VERSION 4.5.13417
GL_RENDERER AMD Radeon HD 5700 Series
GLSL 440
I've also tested with core, core + forward-compatible, compatibility, and default OpenGL profiles.
Here's what I'm doing each frame:
- I upload vertex data to a VertexBuffer using glMapBufferRange with GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT | GL_MAP_FLUSH_EXPLICIT_BIT
- I then flush the range of data which was mapped using glFlushMappedBufferRange
- I unmap the buffer (a rough sketch of this whole path follows the list)
- One of the attributes of the vertex data is an int specifying the texture index within the shader.
- I bind and activate a series of textures, up to the number specified by MAX_TEXTURE_IMAGE_UNITS, which on this driver is 18.
- In the fragment shader, I index a uniform array of samplers to determine which texture the vertex uses.
- uniform sampler2D textures[TEXTURE_COUNT]
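For concreteness, the upload and texture-binding path looks roughly like this. This is a sketch with placeholder names (vbo, vertexData, vertexBytes, textureCount), not my exact code:

```c
/* Per-frame vertex upload (sketch; names are placeholders). */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, vertexBytes,
                             GL_MAP_WRITE_BIT |
                             GL_MAP_INVALIDATE_BUFFER_BIT |
                             GL_MAP_FLUSH_EXPLICIT_BIT);
memcpy(ptr, vertexData, vertexBytes);                       /* write the new vertices  */
glFlushMappedBufferRange(GL_ARRAY_BUFFER, 0, vertexBytes);  /* flush the written range */
glUnmapBuffer(GL_ARRAY_BUFFER);

/* Bind one texture per unit, up to MAX_TEXTURE_IMAGE_UNITS (18 here). */
for (GLint unit = 0; unit < textureCount; ++unit) {
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, textures[unit]);
}
```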
When doing this I get the result in the attached video: random flickering, and sometimes the last texture drawn is not displayed. I'd like to stress that I have checked the parameters going into the functions listed above, and have tested them on a variety of machines and GPUs, including Mesa3D's llvmpipe, so I don't believe my ranges are wrong, but rather that I'm making a wrong assumption somewhere.
I got the scene to render properly by using only "GL_MAP_WRITE_BIT" and by limiting my activated textures (and the uniform array size) to 1. The driver reports support for the GL_ARB_gpu_shader5 extension (which I've tried to request), and I'm using GLSL 440, so dynamically uniform indexing of opaque types should work. Additionally, I do not see why the invalidate-buffer and flush-explicit bits would cause problems either, given the stated driver support.
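The variant that renders correctly is essentially the same path with the extra bits dropped (together with the single-texture limitation above); again a sketch with the same placeholder names:

```c
/* Working variant on this driver: write-only mapping, no invalidate,
 * and no explicit flush (so glFlushMappedBufferRange is not used). */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, vertexBytes, GL_MAP_WRITE_BIT);
memcpy(ptr, vertexData, vertexBytes);
glUnmapBuffer(GL_ARRAY_BUFFER);
```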
I would greatly appreciate it if someone could shed some light on this.
Further testing shows that it's the removal of "GL_MAP_INVALIDATE_BUFFER_BIT", rather than the explicit flush bit (plus setting the array size to 1), that fixes the issue. I would've thought it was the other way around, which makes this even weirder. Also, absolutely nothing comes through in the DebugCallback.
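For reference, the debug output is hooked up roughly like this (a sketch assuming a debug context and the GL 4.3 / KHR_debug style callback; the actual code differs in the details). Nothing is reported while the flickering happens:

```c
/* Debug callback; simply prints every message the driver emits. */
static void APIENTRY onGlDebug(GLenum source, GLenum type, GLuint id,
                               GLenum severity, GLsizei length,
                               const GLchar *message, const void *user)
{
    fprintf(stderr, "GL debug: %s\n", message);
}

/* During context setup: */
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(onGlDebug, NULL);
```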
Just an update in case anyone stumbles onto this. The working theory is that invalidating the buffer while it is still being drawn causes the driver to garbage-collect it (despite it being in use). This behavior is very strange (and makes GL_MAP_INVALIDATE_BUFFER_BIT useless). I've since worked around the issue.
The "setting the array size to 1" part of the problem comes down to indexing sampler arrays with a varying being illegal under the OpenGL spec. Some drivers support it despite that, while others fail with an error along the lines of "sampler arrays indexed with non-constant expressions are forbidden in GLSL 1.30 and later". It seems this driver in particular (and therefore possibly other drivers) fails silently.
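For clarity, this is the pattern in question; a trimmed-down sketch of the fragment shader, not the exact source:

```glsl
#version 440
#define TEXTURE_COUNT 18   // placeholder; matches the bound unit count

uniform sampler2D textures[TEXTURE_COUNT];

flat in int v_texIndex;    // fed from the per-vertex int attribute
in vec2 v_uv;
out vec4 fragColor;

void main()
{
    // v_texIndex varies per primitive, so this index is not
    // dynamically uniform and the spec gives no guarantees here.
    fragColor = texture(textures[v_texIndex], v_uv);
}
```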
There is no conventional way to detect this, but I discovered that if you try to compile an OpenGL ES shader (#version 300 es) that uses a varying index, this driver fails with the appropriate error. You can use that failure as a signal to swap in a shader that selects the texture with a for loop or switch case instead, effectively emulating the varying index.
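The substituted shader looks something like this (again a sketch): the loop bound is a compile-time constant, so each textures[i] access effectively uses a constant index once the loop is unrolled:

```glsl
#version 440
#define TEXTURE_COUNT 18

uniform sampler2D textures[TEXTURE_COUNT];

flat in int v_texIndex;
in vec2 v_uv;
out vec4 fragColor;

void main()
{
    fragColor = vec4(0.0);
    // Compare each constant index against the varying instead of
    // indexing the sampler array with the varying directly.
    for (int i = 0; i < TEXTURE_COUNT; ++i) {
        if (i == v_texIndex) {
            fragColor = texture(textures[i], v_uv);
        }
    }
}
```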
While that's technically within spec, I still consider reporting successful compilation a driver bug. Of course, the driver itself is practically ancient, so I don't expect any support for it.
Cheers