Hello.

I'm trying to implement a linear depth-to-z-buffer mapping in my Minecraft-like game. However, there is one big problem: I want to use early depth tests (conservative depth, to be precise). As far as I know, these tests are implemented in fixed-function hardware, and I cannot control the value used for the early depth test. Instead, I can only hack-hack-hack the clip coordinates produced by my vertex/geometry shader to emulate a perspective projection. So, my question is: **how exactly does the hardware calculate and interpolate values (especially depth) for each pixel (gl_FragCoord.z)?**

For example, texture coordinates: s/z, t/z, r/z are interpolated, and then divided by the interpolated (1/z) at each pixel to obtain smoothly interpolated s, t, r. The question is: what is '1/z'? Is it interpolated(gl_Position.z/gl_Position.w), interpolated(1/gl_Position.w), or interpolated(1/gl_Position.z)?

What kind of interpolation is used to obtain the fragment depth that the early depth test uses?

Thanks.

According to the OpenGL specs (specifically the 3.3 core spec, since that's the one I have open), equation (3.10) in section 3.6 says:

    z = a·z0 + b·z1 + c·z2

where z0, z1 and z2 are the depth values of the vertices, z is the depth of the fragment (all in normalized device coordinates), and a, b and c are the barycentric coefficients (a + b + c = 1). So depth is indeed interpolated as if it were passed to the fragment shader with the noperspective qualifier, and then shifted from the -1..1 NDC range to the 0..1 window range to be stored in gl_FragCoord.z.