
ckotinko1
Adept I

How is depth interpolated in the rasterizer? (linear depth instead of 1/z)

Hello.

I'm trying to implement a linear depth-to-z-buffer mapping in my Minecraft-like game. However, there is one big problem: I want to use early depth tests (conservative depth, to be precise). As far as I know, these tests are implemented in fixed-function hardware, and I cannot control the value used for the early depth test. Instead, I can only hack the clip coordinates produced by my vertex/geometry shader to emulate perspective projection. So my question is: how exactly does the hardware calculate and interpolate per-pixel values, especially depth (gl_FragCoord.z)?

For example, for texture coordinates the hardware interpolates s/z, t/z, r/z and divides each by the interpolated 1/z at every pixel to obtain smoothly varying s, t, r. The question is: what exactly is this '1/z'? Is it interpolated(gl_Position.z/gl_Position.w) as produced by the vertex shader, interpolated(1/gl_Position.w), or interpolated(1/gl_Position.z)?
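
To make the question concrete, the scheme I have in mind is (my own notation; a, b and c are the fragment's screen-space barycentric weights, and q is whatever the hardware actually divides by):

    s = (a*s0/q0 + b*s1/q1 + c*s2/q2) / (a/q0 + b/q1 + c/q2)

So what I really want to know is what q is, and whether depth goes through the same division or is interpolated plainly.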

What kind of interpolation is used to obtain the fragment depth used for early depth tests?

Thanks.

3 Replies
gsellers
Staff

Hi,

You are correct that you cannot control the value used for early depth testing from the shader. That is the whole point of early depth testing: the depth test occurs before the shader runs, so by definition the shader cannot have any effect on the test.

Also, the hardware does not interpolate s/z, t/z etc. It interpolates s/w, t/w and so on, along with z/w and 1/w. At each pixel, it computes the reciprocal of the interpolated 1/w (recovering w) and multiplies each of the other interpolants by it to produce perspective-correct values. That is the purpose of homogeneous coordinates.
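
In shader terms, that fixed-function scheme can be emulated by hand roughly like this (just a sketch; the attribute and varying names are mine, and in practice you would simply use a regular smooth varying and let the hardware do all of it):

    // Vertex shader (sketch)
    #version 330 core
    in vec4 position;
    in float s;                      // some attribute to interpolate
    uniform mat4 mvp;
    noperspective out float sOverW;  // s/w, interpolated linearly in screen space
    noperspective out float invW;    // 1/w, interpolated linearly in screen space
    void main() {
        gl_Position = mvp * position;
        sOverW = s / gl_Position.w;
        invW   = 1.0 / gl_Position.w;
    }

    // Fragment shader (sketch)
    #version 330 core
    noperspective in float sOverW;
    noperspective in float invW;
    out vec4 color;
    void main() {
        float w = 1.0 / invW;         // reciprocal of interpolated 1/w recovers w
        float sCorrect = sOverW * w;  // perspective-correct s, same as a smooth varying
        color = vec4(vec3(sCorrect), 1.0);
    }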

I'm not sure what you're trying to achieve. If you want the input to the early depth test to be linear, you'll need to hack your projection matrices to remove the perspective projection and possibly send 1/w into your fragment shader to do the correction yourself. If you simply want some values to be linearly interpolated in screen space regardless of the perspective projection, you can use the noperspective keyword.
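
For example, the projection-matrix route could look like this (just a sketch; the uniform names are mine, and near/far are the distances to your clip planes). The idea is to overwrite clip-space z at the end of the vertex shader so that z/w comes out linear in view-space depth at the vertices:

    #version 330 core
    in vec4 position;
    uniform mat4 modelView;
    uniform mat4 projection;
    uniform float near;   // distance to the near plane
    uniform float far;    // distance to the far plane
    void main() {
        vec4 viewPos = modelView * position;
        gl_Position = projection * viewPos;
        // Map view-space depth linearly onto -1..1, then pre-multiply by w so
        // the fixed-function divide by w leaves a linear z in NDC.
        float d = -viewPos.z;   // positive distance in front of the camera
        float zNdc = 2.0 * (d - near) / (far - near) - 1.0;
        gl_Position.z = zNdc * gl_Position.w;
    }

Keep in mind that the rasterizer then interpolates that z linearly in screen space across each triangle, so the interior of large triangles will not be exactly view-space linear; only the vertices are.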

Cheers,

Graham

Thanks for your response.

I want linear depth-buffer precision because I want to render big scenes: no logarithmic precision, and no tricks with multiple passes if possible. I already perform perspective correction in my pixel shader; however, I haven't found any information about the interpolation of gl_FragCoord.z. Is this value perspective-corrected in the interpolator (and if so, how), or is it used as-is for the depth test? In other words, what is the exact formula for the depth value used for early depth tests?
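
For reference, the conservative-depth path I mentioned would look roughly like this (GL 4.2 / ARB_conservative_depth; the uniform and varying names are mine). I use depth_less here because a view-space-linear value is never greater than the default hyperbolic depth for the same fragment:

    #version 420 core
    // Promise to the hardware: we only ever make depth smaller than gl_FragCoord.z
    layout (depth_less) out float gl_FragDepth;
    uniform float near;
    uniform float far;
    in float viewDepth;   // positive view-space distance; the vertex shader
                          // does viewDepth = -(modelView * position).z
    out vec4 color;
    void main() {
        gl_FragDepth = (viewDepth - near) / (far - near);  // linear 0..1 depth
        color = vec4(1.0);
    }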

Regards,
Dmitry


According to the OpenGL specs (specifically the 3.3 core spec, since that's the one I have open right now), equation (3.10) in section 3.6 says:

However, depth values for polygons must be interpolated by: z = a*z0 + b*z1 + c*z2 (3.10)

where z0, z1 and z2 are the depth values of the vertices, z is the depth of the fragment, all in normalized device coordinates, and a, b and c are barycentric coefficients (a + b + c = 1). So depth is indeed interpolated as if it were passed to the fragment shader with the noperspective keyword, and then mapped from the -1..1 NDC range to the 0..1 window-space range before being stored in gl_FragCoord.z.
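
Concretely, with the default glDepthRange(0, 1) that last mapping is z_window = 0.5 * z_ndc + 0.5 (and ((f - n)/2) * z_ndc + (n + f)/2 for glDepthRange(n, f)). You can check it with a pair of shaders like this (just a sketch; the varying name is mine):

    // Vertex shader (sketch)
    #version 330 core
    in vec4 position;
    uniform mat4 mvp;
    noperspective out float zNdc;   // NDC z, interpolated linearly in screen space
    void main() {
        gl_Position = mvp * position;
        zNdc = gl_Position.z / gl_Position.w;
    }

    // Fragment shader (sketch): reconstructed should match gl_FragCoord.z
    #version 330 core
    noperspective in float zNdc;
    out vec4 color;
    void main() {
        float reconstructed = 0.5 * zNdc + 0.5;   // default glDepthRange(0, 1)
        // visualize any mismatch, amplified so it is visible
        color = vec4(vec3(abs(reconstructed - gl_FragCoord.z) * 1000.0), 1.0);
    }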
