
Raistmer
Adept II

Memory leak - why??

With this code I had a memory leak that ate all host memory in a few minutes:

if (gpu_data)
    clReleaseMemObject(gpu_data);

gpu_data = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                          fft_len * sizeof(float) * 2, data, &err);
if (!gpu_data)
{
    fprintf(stderr, "ERROR: clCreateBuffer failed, gpu_data\n");
}

and

if (gpu_chirps)
    clReleaseMemObject(gpu_chirps);

gpu_chirps = clCreateBuffer(
    context,
    CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
    sizeof(fftwf_complex) * state.fft_len * state.dm_chunk_small,
    chirp,
    &err);
if (err != CL_SUCCESS)
{
    fprintf(stderr, "Error: clCreateBuffer (gpu_chirp)\n");
}

And I have no memory leak when I allocate them once and use write buffers instead:

err |= clEnqueueWriteBuffer(cq, gpu_data, CL_TRUE, 0,
                            fft_len * sizeof(float) * 2, data, 0, NULL, NULL);

clEnqueueWriteBuffer(cq, gpu_chirps, CL_TRUE, 0,
                     sizeof(fftwf_complex) * state.fft_len * state.dm_chunk_small,
                     chirp, 0, NULL, NULL);

The question is:
why was there a memory leak?
The buffers should have been released before each new allocation, right?
7 Replies
genaganna
Journeyman III

Originally posted by: Raistmer With this code I had a memory leak that ate all host memory in a few minutes: [...] The question is: why was there a memory leak? The buffers should have been released before each new allocation, right?


Raistmer,

             Could you please provide a test case for the memory leak?

Raistmer
Adept II

Please specify what you mean. I gave the exact lines of code that I had, and what I changed to get rid of the memory leak.
Do you need the complete source? Or the kernels that were called over these buffers? I can provide the complete source if needed, no problem, it's open source, but it seems impractical to post it on the forum...

Originally posted by: Raistmer Please specify what you mean. I gave the exact lines of code that I had, and what I changed to get rid of the memory leak. [...]


Please send it to streamdeveloper@amd.com if it is very big.

Raistmer
Adept II

Done.

Originally posted by: Raistmer Done.


Raistmer,

             Please send instructions for running the application you sent to the streamdeveloper mail id. We are waiting for your reply.

Raistmer
Adept II

Just sent. If you still have trouble building the app, please let me know.
If needed, I can just provide executables...

EDIT: oops, sorry, I sent it to dev central instead of streamdeveloper; I will forward it in a few minutes.

Calling clReleaseMemObject is not the same as calling free on the host; it merely decrements the buffer's reference count. Only when the reference count reaches zero AND all enqueued tasks referencing the buffer have completed is the runtime free to deallocate it.

However, it looks like you're using blocking writes, which block until the write completes. Is there a kernel running, or something else long-lived, still using the buffer while you allocate the one for the next round? If so, put a clFinish after it before releasing the buffer.
