2nd question today, but more or less unrelated to the first one.
I have an application that needs to use almost all of the memory the GPU offers (up to 5 GByte). The driver reports (via CL_DEVICE_GLOBAL_MEM_SIZE and CL_DEVICE_GLOBAL_FREE_MEMORY_AMD) that enough memory is available, and the application also stays below CL_DEVICE_MAX_MEM_ALLOC_SIZE for each allocation. The total memory is split into two or three buffers, not necessarily of the same size.
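For reference, a minimal sketch of the queries I rely on - the free-memory token comes from the cl_amd_device_attribute_query extension and, as far as I can tell, reports KBytes (the helper name is my own):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Token from cl_amd_device_attribute_query, in case the header lacks it. */
#ifndef CL_DEVICE_GLOBAL_FREE_MEMORY_AMD
#define CL_DEVICE_GLOBAL_FREE_MEMORY_AMD 0x4039
#endif

static void print_mem_limits(cl_device_id dev)
{
    cl_ulong global_size = 0, max_alloc = 0;
    size_t   free_kb[2]  = {0, 0};  /* free memory in KBytes, per the extension */
    size_t   ret         = 0;

    clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(global_size), &global_size, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(max_alloc), &max_alloc, NULL);
    /* Only meaningful if the device exposes cl_amd_device_attribute_query;
       pass room for two values and let the driver tell us how much it wrote. */
    clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_FREE_MEMORY_AMD,
                    sizeof(free_kb), free_kb, &ret);

    printf("global mem:     %llu MiB\n", (unsigned long long)(global_size >> 20));
    printf("max alloc:      %llu MiB\n", (unsigned long long)(max_alloc >> 20));
    printf("free (AMD ext): %zu MiB\n", free_kb[0] >> 10);
}
```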
Running the card under Linux it works fine, while under Windows 10 I see the driver starting to swap memory out to the host system once I touch the last ~256 MByte - and this slows execution down extremely.
In both cases the GPU does no graphics output - that is handled by another device.
Question: is there any undocumented environment variable or other trigger to turn off GPU memory virtualization completely, so that the buffers are forced to stay on the device?
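For illustration, this is roughly what I mean by forcing placement - a sketch using the standard OpenCL 1.2 clEnqueueMigrateMemObjects hint (names are my own; as far as I understand the runtime remains free to page the buffer out again later, which is exactly what I want to prevent):

```c
#include <CL/cl.h>

/* Sketch: create a buffer and nudge it onto the device right away via
   the standard migration call. This is only a hint to the runtime. */
static cl_mem create_resident_buffer(cl_context ctx, cl_command_queue queue,
                                     size_t size, cl_int *err)
{
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, size, NULL, err);
    if (*err != CL_SUCCESS)
        return NULL;
    /* flags = 0 means "migrate to the device associated with queue". */
    *err = clEnqueueMigrateMemObjects(queue, 1, &buf, 0, 0, NULL, NULL);
    clFinish(queue);
    return buf;
}
```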
Thanks very much