
swoop
Adept I

Problems with OpenGL FBO shared packed_depth_stencil depth format

GL_DEPTH_STENCIL with GL_UNSIGNED_INT_24_8

Radeon 5870 Driver: 12.10

I'm using this format to reconstruct view-space position in a shader, and the values being read don't seem to be correct. I have FBO_1 with a depth attachment and two color attachments, and a second FBO_2 that shares the depth attachment and uses its own two color attachments.

I render to all targets of FBO_1: depth only first, then the two color attachments. After that, depth writes are disabled and only depth testing is done. When writing to the two color attachments of FBO_2, the depth texture (shared with this FBO) is sampled by the shader, which reconstructs view-space positions from the sampled depths. Sampling from this texture while rendering to FBO_2's color attachments seems to be what causes the problems.
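Roughly, the pass order looks like this (a sketch only; fbo1, fbo2, tex_depthstencil, width and height are placeholder names, not my actual code, and a GL 3.x context/loader is assumed):

/* Pass 1: depth-only into FBO_1, no color writes. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glDrawBuffer(GL_NONE);
glDepthMask(GL_TRUE);
glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
/* ... draw scene ... */

/* Pass 2: FBO_1's two color attachments; depth writes off, testing stays on. */
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
glDepthMask(GL_FALSE);
/* ... draw scene ... */

/* Pass 3: FBO_2's two color attachments; the shared depth texture is bound
   for sampling so the shader can reconstruct view-space position. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glDrawBuffers(2, bufs);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_depthstencil);
/* ... draw lighting/fog passes ... */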

I get errors in my lighting and fog calculations, which all reconstruct the view-space xyz position, or just z, from the depth. The problems all go away and everything works as expected if I switch to the FLOAT data type and a DEPTH-only format. I don't get this problem on other hardware using either format.
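For comparison, the two allocations look roughly like this (the plain-depth internal format below is only an example of what I mean by FLOAT/DEPTH-only; width and height are placeholders):

/* Packed depth/stencil texture that shows the problem: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

/* Plain depth texture with FLOAT data that works everywhere
   (internal format here is illustrative): */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);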

Is anyone aware of issues related to this format in the current drivers with a similar setup?

0 Likes
5 Replies
fscan
Journeyman III

Just a thought, maybe you need to disable stencil testing too?

I have a similar setup with my renderer, but I just copy the depth/stencil from fbo1 to fbo2 with glBlitFramebuffer once rendering to fbo1 is done. That way I can use fbo1's depth buffer as a texture and still enable depth/stencil testing on fbo2.
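Something along these lines (a sketch; fbo1/fbo2, width and height are placeholders for my setup):

/* After rendering to fbo1, copy its depth/stencil into fbo2 so fbo2 can keep
   testing while fbo1's depth texture is sampled. Depth/stencil blits must use
   GL_NEAREST and matching formats. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo2);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT, GL_NEAREST);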

0 Likes

Thanks for the suggestion; unfortunately, stencil testing and writing are both off. It may sound weird to want stencil support and not use it, but it's only used sometimes, depending on what's being rendered. I noticed the problem when stenciling wasn't being used.

The depth is shared because I need to test against it, but I also need to sample from it when rendering to the second FBO. The funny thing is, if I don't use the packed depth/stencil UINT24_8 format, I get the expected result across all my hardware with a straight depth float. So I'm pretty sure it's something unique to this specific format, even when the stencil portion is ignored.

0 Likes

Have you tried using the explicit internal format (GL_DEPTH24_STENCIL8)? Whether or not this is the problem, I think it's always better to specify the internal format this way. Also, don't forget to use GL_DEPTH_STENCIL_ATTACHMENT for the FBO.
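That is, something like this (a sketch; tex_depthstencil, width and height are placeholder names):

/* Sized internal format at allocation time... */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

/* ...and the combined attachment point instead of separate depth and stencil attachments: */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, tex_depthstencil, 0);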

0 Likes

I do have DEPTH24_STENCIL8 as the <internalformat> and DEPTH_STENCIL as the <format>. I wasn't aware of a GL_DEPTH_STENCIL_ATTACHMENT point at the time I wrote this a while ago, so I've been setting it up according to the usage example in the packed_depth_stencil spec, calling glFramebufferTexture2D twice, once for the depth attachment and once for the stencil:

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, tex_depthstencil, 0);

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, tex_depthstencil, 0);

I changed these calls to a single call that uses GL_DEPTH_STENCIL_ATTACHMENT, and although the framebuffer still reports complete with the single call, the results are still incorrect. I used to test against my 5870; now I code on it. So I believe all of this was working code at one point on all my cards; nothing has changed on my end except driver updates, and at some point this started, although I'm not exactly sure when.

I was just wondering: why do you use glBlitFramebuffer? It seems you could render to fbo1's depth and share/attach it to fbo2, avoiding the copy by making the depth a framebuffer texture (although that might put you in the same boat as me).

0 Likes

If I remember correctly, the biggest reason is laziness.

When I was writing the code, my FBO "manager" didn't understand the concept of shared depth buffers. I use a texture, but only for *reading* the depth values from the previous stage. My (deferred) renderer is partitioned into stages, so when fbo2 (the effect buffer) is used, fbo1 (the g-buffer) is only read, and the blit is only necessary once per frame. Also, since fbo2's depth buffer is never used in another stage, I use a renderbuffer for it; I *think* this has better performance.
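The fbo2 depth/stencil setup is roughly this (a sketch with placeholder names):

/* Depth/stencil renderbuffer for the effect-buffer FBO; it is never sampled,
   only tested against, so a renderbuffer is enough. */
GLuint rbo_depthstencil;
glGenRenderbuffers(1, &rbo_depthstencil);
glBindRenderbuffer(GL_RENDERBUFFER, rbo_depthstencil);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, rbo_depthstencil);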

Also, I use the OpenGL 3 core version; I don't know if there are any differences from the EXT version.

0 Likes