
OpenCL

dukeleto
Adept I

problems using large amounts of global memory

I've finally decided to try to understand and fix a problem I've been having for quite a while now:

not being able to use most of the global memory (RAM) on the GPU for a computation.

Here's a description of the problem:

- my code is an in-house 2D wave propagation solver; it declares a number of cl_mem arrays

(23 to be precise) and performs operations on them

- the code "works": results quantitatively match reference data obtained from legacy Fortran code

- the largest array size that works is about 3,500,000 doubles, which gives, for the full code,

  3.5e6 * 23 * (8 bytes) ≈ 644 MB (roughly 640 MB)

- above this size, the computer hangs without leaving any useful messages in the logs (that I could find)

- latest beta Linux driver, running on Ubuntu 13.10; same symptoms with slightly older drivers

- I've tried setting GPU_MAX_ALLOC_PERCENT to various values (a suggestion found in this thread), to no avail

- I've tried setting GPU_FORCE_64BIT_PTR to 1, also with no luck

Can anyone guess what might be going on? Am I missing something obvious to allow

a decent fraction of the total memory to be used?

Thanks!

6 Replies
dukeleto
Adept I

Re: problems using large amounts of global memory

I have pared down my code to just declarations and a single ultra-simple kernel.

With this super-simple code, on a workstation with a 6GB 7970 running the latest

driver,

- everything works fine if my arrays total less than about 640 Mbytes

- above that size, copying from one GPU array to another (with clEnqueueCopyBuffer) works,

  but launching the test kernel segfaults on the clEnqueueNDRangeKernel call.

At this point, might I ask that someone from AMD at least state whether I should be able to

use more memory? I can send the simple code base if that would help with checking.

Thanks,

Olivier

denis
Adept I

Re: problems using large amounts of global memory

Hi Olivier,

Can you tell me more about your problem? When you allocate the buffers, does everything work fine?

I have a problem that appears only after I extend a buffer; having now reached a total of 582,400,368 bytes allocated, I see very strange behavior, and sometimes the driver crashes.

I have a 7970 with 3 GB of memory. Is there a 600 MB limit out of the 3 GB available?

dukeleto
Adept I

Re: problems using large amounts of global memory

Hi Denis,

I haven't checked extensively how much I can *allocate* with no problem. I will be

able to check that tomorrow at work. In any case with my very simple test, the

allocation works ok beyond 600 MB, but the first call to a skeleton kernel

results in a segfault, attributed by gdb to the clEnqueueNDRangeKernel call.

I will update this when I have had time to perform additional tests.

Maybe someone from AMD will have commented by then?

Olivier

sudarshan
Staff

Re: problems using large amounts of global memory

Hi Olivier,

Large buffer allocation should not be a problem, as it is routinely done by many users (you can find a couple of discussions in the Large buffers and Max memory allocation Restriction threads).

Can you check the maximum memory allocation size (reported by clinfo) on your machine?

Also, how are you passing all the buffers to the kernel? Have you experimented with allocating a single buffer of size > 640 MB and passing it to a simple kernel?
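
For instance, a minimal sketch of querying the same limits clinfo reports, via clGetDeviceInfo (assumes a single GPU device; error handling omitted):

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong max_alloc = 0, global_mem = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* Largest single buffer the runtime will allocate */
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(max_alloc), &max_alloc, NULL);
    /* Total global memory on the device */
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(global_mem), &global_mem, NULL);

    printf("CL_DEVICE_MAX_MEM_ALLOC_SIZE: %llu MB\n",
           (unsigned long long)(max_alloc >> 20));
    printf("CL_DEVICE_GLOBAL_MEM_SIZE:    %llu MB\n",
           (unsigned long long)(global_mem >> 20));
    return 0;
}
```

If the reported max allocation size is well below total memory, that is the per-buffer ceiling the runtime enforces (the GPU_MAX_ALLOC_PERCENT variable mentioned earlier in this thread is meant to influence it).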

Thanks,

-Sudarshan

denis
Adept I

Re: problems using large amounts of global memory

Hi Olivier,

I ran more tests on the amount of memory allocated, and it seems that is not where my problem is; I allocate a maximum of about 300 MB for a single buffer.

dukeleto
Adept I

Re: problems using large amounts of global memory

Hi Sudarshan,

thanks for the response. I am indeed able to allocate a single large buffer and pass it to a kernel.

I will therefore try to build up from there, to see where my problem was/is coming from.

Thanks,

Olivier
