I'm currently toying around with GPU voxelization using OpenGL. I've implemented a simple algorithm that voxelizes a scene into mip 0 of a 3D texture and then runs a compute shader that takes mip 0 as input and computes the further mip levels by averaging the values in the 3D texture one mip level at a time (using image load/store, binding the individual levels with glBindImageTexture). The voxels are then displayed by simply drawing an instanced cube for each entry in a mip level and sampling the 3D texture to decide whether the cube should be drawn, and in which color.
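For context, the downsampling pass I'm describing looks roughly like the following. This is a simplified sketch, not my exact shader; binding points, the rgba8 format, and the local size are placeholders:

```glsl
#version 450
layout(local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

// Source: the previous mip, bound with glBindImageTexture(..., level - 1, ...)
layout(binding = 0, rgba8) uniform readonly image3D srcMip;
// Destination: the level being computed, bound with glBindImageTexture(..., level, ...)
layout(binding = 1, rgba8) uniform writeonly image3D dstMip;

void main() {
    ivec3 dst = ivec3(gl_GlobalInvocationID);
    // Guard against partial workgroups at the texture border.
    if (any(greaterThanEqual(dst, imageSize(dstMip)))) return;

    // Average the 2x2x2 block of source texels covered by this destination texel.
    ivec3 src = dst * 2;
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 8; ++i)
        sum += imageLoad(srcMip, src + ivec3(i & 1, (i >> 1) & 1, (i >> 2) & 1));
    imageStore(dstMip, dst, sum * 0.125);
}
```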
This works perfectly fine for the first mip level.
Unfortunately it seems that either the driver does not allocate the full memory for all mip levels, or the compute shader that computes the mip levels is not invoked for every entry in the respective mip level. For mip 0 all entries exist correctly in the texture; for mip 1 only about half of the entries exist; for mip 2 no entries seem to be set at all.
I have gone over my code multiple times, and normally I would assume the error is on my side, but I also tested the same code on an NVIDIA 980 Ti and it works flawlessly there.
I attached an archive with a more or less minimal implementation that shows the issue; just enable "Display Voxels" in the GI tab of the menu and switch between the mip levels.
If it helps I can also paste parts of the code here, but as it is a fairly complex program I doubt that would be very useful on its own.
I tried to debug the issue with RenderDoc (which crashes when loading the capture, presumably because of the ARB_bindless_texture extension I'm using) and with CodeXL. CodeXL gives an interesting result: it does display all of the mip levels of the texture, but it stretches the expected output values as if the lower mips were bigger than they actually are.
I tried a multitude of different sizes for the 3D texture (powers of two and others), but with the same outcome.
I'm currently running a Vega 64 Liquid with the Radeon Crimson 17.11.2 drivers.