
Problems using large amounts of global memory

Question asked by dukeleto on Jun 27, 2014
Latest reply on Jul 18, 2014 by dukeleto

I've finally decided to try to understand and fix a problem I've been having for quite a while now: I can't use more than a fraction of the global memory (RAM) on the GPU for a computation.


Here's a description of the problem:

- my code is an in-house 2D wave propagation solver; it declares a number of cl_mem arrays (23, to be precise) and performs operations on them
- the code "works": results quantitatively match reference data obtained from a legacy Fortran code
- the largest array size that works is about 3,500,000 doubles, which gives, for the full code, 3.5e6 * 23 * 8 bytes ≈ 640 Mbytes

- above this size, the computer hangs without leaving any useful messages in the logs (that I could find)
- latest beta Linux driver, running on Ubuntu 13.10; same symptoms with slightly older drivers
- I've tried setting GPU_MAX_ALLOC_PERCENT to various values, to no avail (suggestion found in this thread)
- I've tried setting GPU_FORCE_64BIT_PTR to 1, also with no luck


Can anyone guess what might be going on? Am I missing something obvious that would allow a decent fraction of the total memory to be used?