I ran into this issue on both an old HD 4890 and an R9 290 (Catalyst 14.12, Win7), so it has been around for some time.
I'm rendering into a multisampled FBO (tried both RGBA8 and RGBA fp16). GL_SAMPLE_ALPHA_TO_COVERAGE is enabled, and the fragment shader writes the coverage value to alpha. After resolving the multisampled FBO, the color channels behave as expected (the expected dither patterns where coverage is fractional). The alpha channel is the problem. GL_SAMPLE_ALPHA_TO_ONE is also enabled, which should set the alpha written to each covered sample to 1 regardless of the coverage alpha supplied by the shader. However, the driver appears to ignore this completely: the resulting alpha in the FBO is the coverage value, not 1. The same code works as expected on NVIDIA hardware (resulting alpha is always 1).
Is this a known issue?
Does anyone have any information about this issue? Is this the right place to post about it?
After several more tests, we are now highly confident that this is a driver issue. We have tested several AMD cards of different generations, each with the latest driver for that card, and every single one shows the issue. The corresponding Piglit tests (the sample-alpha-to-one group) consistently fail. NVIDIA cards are not affected.
This is a problem for us. We're using the feature in a deferred, image-based LOD system. While we have a partial workaround, it is neither free (it requires an additional GBuffer pass) nor fully effective. The result is a noticeable image-quality difference between AMD and NVIDIA cards, which is hard to explain to our customers. I'm also having a hard time understanding how a GL core feature can be broken for so long. Surely it must have been flagged during AMD's internal QA / compliance testing?
Any information would be highly appreciated.