I am the developer of the Supermodel emulator (a Sega Model 3 arcade emulator). I've stumbled on an issue that seems to affect all AMD cards, making the project completely unusable on them, even on top-of-the-range cards. Currently I am testing with a Radeon Pro WX 7100. For comparison's sake, the same code works flawlessly on Intel integrated GPUs. I've checked the OpenGL code itself with AMD CodeXL and it finds no issues (no API violations).
I am creating an FBO with three texture attachments (RGBA8) and one combined depth/stencil attachment (a rough setup sketch follows the shader below). I am drawing the base texture (the first attachment) like this:
uniform sampler2D tex1;   // base (opaque) colour attachment
varying vec2 fsTexCoord;

void main()
{
    vec4 colBase = texture2D(tex1, fsTexCoord);
    if (colBase.a < 1.0) discard;   // keep only fully opaque pixels
    gl_FragColor = colBase;
}
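For context, the FBO setup is along these lines. This is a minimal sketch rather than the exact code from R3DFrameBuffers.cpp; the variable names, the texture parameters, and the use of a renderbuffer for depth/stencil are assumptions:

// Sketch: three RGBA8 colour attachments plus one combined depth/stencil.
GLuint m_frameBufferID, m_texIDs[3], m_depthStencilID;

glGenFramebuffers(1, &m_frameBufferID);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBufferID);

glGenTextures(3, m_texIDs);
for (int i = 0; i < 3; i++) {
    glBindTexture(GL_TEXTURE_2D, m_texIDs[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_width, m_height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                           GL_TEXTURE_2D, m_texIDs[i], 0);
}

// single combined depth/stencil attachment
glGenRenderbuffers(1, &m_depthStencilID);
glBindRenderbuffer(GL_RENDERBUFFER, m_depthStencilID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, m_width, m_height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, m_depthStencilID);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    { /* handle incomplete FBO */ }
glBindFramebuffer(GL_FRAMEBUFFER, 0);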
Anyway, drawing this way hits some kind of slow path inside the driver.
Comparison screenshots so you can see the issue. Look at the GPU usage and frame rate: 40 fps at 80% GPU usage.
https://i.imgur.com/RTySusy.png
If I simply blit the FBO to the back buffer like so:
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_frameBufferID);   // read from our FBO
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);                 // draw to the default back buffer
glBlitFramebuffer(0, 0, m_width, m_height, 0, 0, m_width, m_height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
performance looks like this: 20% GPU usage at 60 fps.
https://i.imgur.com/xBQPsPP.png
I can't use the second (blitting) path because it is missing critical effects that we need for correct emulation.
This is the full source code for the frame buffer emulation: [r775] /trunk/Src/Graphics/New3D/R3DFrameBuffers.cpp
Basically I am drawing opaque pixels to one buffer and translucent (alpha) pixels to two separate layers, which are then composited together at the end of the frame.
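Conceptually, that end-of-frame composite is a fullscreen pass that layers the two alpha buffers over the opaque base. Here is a minimal fragment-shader sketch of the idea; the sampler names and the blend order are illustrative assumptions, not the emulator's exact code:

// Sketch of the end-of-frame composite pass (illustrative only).
uniform sampler2D texBase;    // opaque pixels
uniform sampler2D texAlpha1;  // first alpha layer
uniform sampler2D texAlpha2;  // second alpha layer

varying vec2 fsTexCoord;

void main()
{
    vec4 col = texture2D(texBase, fsTexCoord);

    vec4 a1 = texture2D(texAlpha1, fsTexCoord);
    col.rgb = mix(col.rgb, a1.rgb, a1.a);   // blend first alpha layer over the base

    vec4 a2 = texture2D(texAlpha2, fsTexCoord);
    col.rgb = mix(col.rgb, a2.rgb, a2.a);   // then the second alpha layer on top

    gl_FragColor = col;
}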