Hello, everybody! I have a question concerning mipmapping. I use OpenGL and load a texture's mipmaps (which are stored in some magic file format) manually, calling glTexImage2D for each mipmap level. It is guaranteed that my magic file contains all required mipmap levels, and that their dimensions follow the rules of half-size reduction. However, the problem is that all those images are stored in the file in reverse order - from the smallest image (mipmap level n) to the largest (the base image, level 0). I do not want to seek through the file searching for the appropriate mipmap just to reverse the upload order if that is not necessary - I want to load them in the order in which they are stored in the file, starting from the highest level (level n, 1x1 dimensions). The question is: is this reverse order of mipmap uploading legal? It does work on my configuration, but can I be sure about other implementations?
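For what it's worth, since the file is guaranteed to contain every level and the dimensions follow the half-size rule, you don't actually have to scan the file to find a level: each level's byte offset can be computed up front, and then you can call glTexImage2D in whichever order you prefer (base level first, if you want to play it safe). A minimal sketch, assuming a tightly packed, uncompressed format with a fixed bytes-per-pixel count (the function names are made up for illustration, not from any real loader):

```c
#include <assert.h>
#include <stddef.h>

/* Dimensions of mip level `level` for a w x h base image,
   following the standard half-size reduction (never below 1). */
static void mip_dims(int w, int h, int level, int *lw, int *lh)
{
    *lw = w >> level; if (*lw < 1) *lw = 1;
    *lh = h >> level; if (*lh < 1) *lh = 1;
}

/* Byte offset of mip level `level` inside a file that stores the
   levels smallest-first (level nlevels-1 first, level 0 last),
   assuming tightly packed pixels with `bpp` bytes per pixel. */
static size_t mip_offset_smallest_first(int w, int h, int bpp,
                                        int nlevels, int level)
{
    size_t off = 0;
    /* All levels smaller than `level` precede it in the file. */
    for (int l = nlevels - 1; l > level; --l) {
        int lw, lh;
        mip_dims(w, h, l, &lw, &lh);
        off += (size_t)lw * (size_t)lh * (size_t)bpp;
    }
    return off;
}
```

For a 4x4 RGBA8 texture (3 levels, 4 bytes per pixel) stored smallest-first, this gives offset 0 for the 1x1 level, 4 for the 2x2 level, and 20 for the 4x4 base image, so you can fseek straight to any level.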
It depends on your minimum target hardware.
On ATI HD cards, glGenerateMipmap will create the upper mipmap levels. Before the ATI HD series it may or may not, and you might want to create each level manually with garbage data first, then send the real data in whatever order suits you.
One more time: I DO HAVE all required mipmap levels prepared and ready for uploading. The question is: does the order in which the individual mipmaps are uploaded (using glTexImage2D) matter? I mean, is it equally legal to upload mipmaps from the largest image to the smallest and vice versa? The upload order is not specified in the official OpenGL documentation, but everywhere I have looked so far, everybody loads mipmaps from largest to smallest (base image going first). So should I expect surprises if I try to upload the images starting from the highest level (smallest image) instead?
Maybe this technical question will clarify the problem: how and when is texture memory allocated for a texture's mipmaps? Is there an individual memory chunk for each mipmap, with every level holding a pointer to its own memory location? Or are all mipmaps stored consecutively in one common array (which would explain why there are so many restrictions, like power-of-two dimensions and the requirement that internalFormat be the same for all levels)?
Well, I do not see a reason why texture borders do not work properly on most implementations - maybe just because nobody uses them?..
Consider the following example:
Texture A has mipmaps 4x4, 2x2, 1x1;
Texture B has mipmaps 4x2, 2x1, 1x1;
So, in case the texture array for all mipmaps is a single memory chunk (which is what I worry about), and we load the mipmaps in reverse order (starting from the 1x1 level), then for both textures A and B the same base image size will be expected right after the first 1x1 mipmap is loaded - namely 4x4 - and the same amount of memory would be allocated for both chains. However, as we know, texture B requires less memory, since it has a smaller height. Only after loading the subsequent mipmaps could the actual amount of memory needed be determined precisely. So what would happen then? Would part of the memory allocated for non-square textures simply go unused? Or, worse, would the mipmaps' data get mixed up?
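To put numbers on this example (assuming an uncompressed 4-bytes-per-pixel format like RGBA8, purely for illustration): texture A's chain takes 64 + 16 + 4 = 84 bytes, while texture B's takes only 32 + 8 + 4 = 44 bytes - so a driver that sized the whole chunk from the 1x1 upload alone really could not tell the two apart. A tiny sketch of the arithmetic:

```c
#include <assert.h>
#include <stddef.h>

/* Total bytes for a tightly packed mip chain of `nlevels` levels
   with base size w x h and `bpp` bytes per pixel. */
static size_t mip_chain_bytes(int w, int h, int bpp, int nlevels)
{
    size_t total = 0;
    for (int l = 0; l < nlevels; ++l) {
        int lw = w >> l; if (lw < 1) lw = 1;
        int lh = h >> l; if (lh < 1) lh = 1;
        total += (size_t)lw * (size_t)lh * (size_t)bpp;
    }
    return total;
}
```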
I'm just about to implement this reverse mipmap storage approach in my own file format, and I'm really scared of getting unexpected texture corruption, memory leaks, or something else on "some configurations"... It seems to work properly on my computer (no discrete video card - just an Intel Core i3 with integrated graphics), even with non-square textures, but...
Then load all mipmap levels into memory first, and upload them to OpenGL in the normal order (base level first).
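That suggestion is cheap to implement: read the whole chain into one buffer as it is stored in the file (smallest level first), build a pointer per GL level, and then loop l = 0..n-1 calling glTexImage2D(GL_TEXTURE_2D, l, ...) in the conventional order. A sketch of the indexing step, assuming a tightly packed layout with a fixed bytes-per-pixel count (the function name and layout assumptions are mine, not from any real loader):

```c
#include <assert.h>
#include <stddef.h>

/* Given one buffer holding the whole mip chain as stored in the
   file (smallest level first), fill `levels[l]` with a pointer to
   the pixel data for GL level l (0 = base image). Afterwards you
   can upload levels[0], levels[1], ... in the normal order. */
static void index_levels_smallest_first(const unsigned char *file_data,
                                        int w, int h, int bpp, int nlevels,
                                        const unsigned char *levels[])
{
    const unsigned char *p = file_data;
    /* Walk the file front to back: highest level comes first. */
    for (int l = nlevels - 1; l >= 0; --l) {
        int lw = w >> l; if (lw < 1) lw = 1;
        int lh = h >> l; if (lh < 1) lh = 1;
        levels[l] = p;
        p += (size_t)lw * (size_t)lh * (size_t)bpp;
    }
}
```

This costs one pass over sizes you already know, and it sidesteps the whole question of whether drivers tolerate reverse-order glTexImage2D calls.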
I can say that the driver uploads a texture to the GPU only AFTER the first use of that texture for actual drawing. This was tested on ATI.
Hm, smart solution... If so, I think the mipmap upload order really doesn't matter for ATI cards. I hope I will be able to finish my "investigation" on the nvidia forum, although their community is not as responsive as this one... Thank you for the help - I really appreciate it!
Regardless of potential issues with driver implementations, it is indeed better to start from the base level to get the best texture upload performance. (The mipmap levels are stored packed together in memory, not independently.)
"I do not see reason why texture borders are not working properly on most of implementations"
And texture borders are not in DX11 either, so having special (complex) hardware to optimize a feature that nobody really uses is difficult to justify.