OpenGL & Vulkan

Journeyman III

OpenGL sparse texture: memory leak

I am trying to implement a sparse texture array.

I think I found a bug in the AMD driver, or I am doing something wrong.

With a sparse array texture I get a memory leak:

Opengl memory leak on amd - YouTube

Texture format:

rgba8

Resolution:

1024 * 1024, 11 mip levels

Sources:

Render/GLSparseTextureArray.h at master · vindast/Render · GitHub 

Render/GLSparseTextureArray.cpp at master · vindast/Render · GitHub 

Minimal exe that reproduces this bug:

https://yadi.sk/d/VOjT0uZ8uhRdWw

System requirements: OpenGL 4.5, AVX2

 

I found two scenarios:

case 1:

press button "create sparseTexture buffer"

enable "bDrawOnAllocation"

press button "create texture"

-> the memory leak occurs after glTexPageCommitmentARB and accessing the texture in a shader

case 2:

press button "create sparseTexture buffer"

press button "create texture"

press button "load texture"

-> the memory leak occurs after glTexPageCommitmentARB and trying to update mip level 0 via glTexSubImage3D, without accessing the texture in a shader

Important information:

The bug occurs when the layer count is greater than 512. (I get 2048 for both GL_MAX_SPARSE_ARRAY_TEXTURE_LAYERS_ARB and GL_MAX_SPARSE_ARRAY_TEXTURE_LAYERS_AMD.)

With 512 layers it does not occur, if I have a single GLSparseTextureArray object;

with 513 layers it does occur.

No GL errors are reported by glGetError().

Tested on:

RX 480 8 GB, latest driver: memory leak (Win 10)

AMD Vega 8, new driver: loads, no memory leak, but the driver shuts down after a while (Win 10)

NVIDIA GTX 850M: OK, old driver (Win 8.1)

GTX 1050: OK, new driver

I think my problem is the same as this one:

OpenGL ARB_sparse_texture (Driver crash, texture issues) 

Sorry for the bad English

2 Replies
Staff

Re: OpenGL sparse texture: memory leak

Thank you for reporting it. I've moved this post to the AMD OpenGL forum. Also, I have whitelisted you for the AMD Devgurus community.

Thanks.

Journeyman III

Re: OpenGL sparse texture: memory leak

upd:

Minimal C++ code that reproduces this bug:

// Assumes a current OpenGL 4.5 context with ARB_sparse_texture available,
// and <algorithm>, <cassert>, <iostream> included.
int w = 1024;
int h = 1024;
int n = std::max(w, h); // was CL::max (my own helper) in the original code
int numLods = 1;
while (n >>= 1)
{
    numLods++;
}

int numLayers = 0;
glGetIntegerv(GL_MAX_SPARSE_ARRAY_TEXTURE_LAYERS_ARB, &numLayers);

GLenum target = GL_TEXTURE_2D_ARRAY;
GLenum internalFormat = GL_RGBA8;
GLenum format = GL_RGBA;
GLenum type = GL_UNSIGNED_BYTE;
char* SysMem = new char[w * h * 4];

// Pick a virtual page size that tiles the texture exactly.
int numPageSizes = 0;
glGetInternalformativ(target, internalFormat, GL_NUM_VIRTUAL_PAGE_SIZES_ARB, 1, &numPageSizes);
assert(numPageSizes > 0);
std::cout << "numPageSizes = " << numPageSizes << std::endl;

int* pageSizesX = new int[numPageSizes];
int* pageSizesY = new int[numPageSizes];
glGetInternalformativ(target, internalFormat, GL_VIRTUAL_PAGE_SIZE_X_ARB, numPageSizes, pageSizesX);
glGetInternalformativ(target, internalFormat, GL_VIRTUAL_PAGE_SIZE_Y_ARB, numPageSizes, pageSizesY);

int pageSizeIndex = -1;
for (int i = 0; i < numPageSizes; i++)
{
    std::cout << "x = " << pageSizesX[i] << ", y = " << pageSizesY[i] << std::endl;
    if ((w % pageSizesX[i]) == 0 && (h % pageSizesY[i]) == 0)
    {
        pageSizeIndex = i;
        break;
    }
}
std::cout << "pageSizeIndex = " << pageSizeIndex << std::endl;
assert(pageSizeIndex != -1);

GLuint id = 0;
glGenTextures(1, &id);
assert(id > 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(target, id);
glTexParameteri(target, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
glTexParameteri(target, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, pageSizeIndex);
glTexStorage3D(target, numLods, internalFormat, w, h, numLayers);

int testCount = 10;
for (int i = 0; i < testCount; i++)
{
    // Commit mip 0 of layer 0, then upload to it; memory grows every iteration.
    glTexPageCommitmentARB(
        target,
        0, // lod
        0, // x
        0, // y
        0, // layer
        w,
        h,
        1,
        GL_TRUE);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexSubImage3D(
        target,
        0, // lod
        0, // x
        0, // y
        0, // layer
        w,
        h,
        1,
        format,
        type,
        SysMem);
}