With the latest drivers currently available (17.10), I'm able to allocate about 3.5 GB of memory in one go on Windows 10 64-bit on an R9 290X with 4 GB of VRAM. Yet, for some reason, clGetDeviceInfo(..., CL_DEVICE_MAX_MEM_ALLOC_SIZE, ...) reports only about 2.5 GB on Ubuntu 16.04 with the 17.40 drivers, which, as I understand it, are the latest at the moment.
That said, is there a way for me to allocate a single chunk of at least 3 GB on Linux? Most of the links and posts I find online lead to cryptocoin-mining material recommending various environment variables to get past this limit, for example:
export GPU_FORCE_64BIT_PTR=0
export GPU_MAX_HEAP_SIZE=100
export GPU_USE_SYNC_OBJECTS=1
export GPU_MAX_ALLOC_PERCENT=100
export GPU_SINGLE_ALLOC_PERCENT=100
I'm not mining, but still, setting these environment variables to the suggested values doesn't seem to help - clGetDeviceInfo consistently reports a limit of about 2.5 GB.
I suppose I could split one large buffer into two smaller ones to get past this limitation, but that would introduce all sorts of #ifdefs and platform-specific paths into the code - acceptable, but not ideal.
So, can I somehow allocate a single 3.5 GB chunk on Linux? Is it possible at all, and if so, what should I try?
Thank you in advance.
sp