
OpenCL

szpeter
Journeyman III

CL_MAP_FAILURE when mapping 4GB buffer

I get this error when trying to map any buffer of 4GB or larger on my 8GB RX 580. The problem is not insufficient VRAM, because a 3.99GB buffer still maps fine. The environment is Windows with the latest Radeon driver.

The issue does not occur on my NVIDIA card. What could be causing the problem?

0 Likes
6 Replies
dipak
Big Boss

Thank you for reporting this. Could you please provide a reproducible test-case and attach the clinfo output?

Thanks.

0 Likes

Here is a code sample.

```cpp
// Minimal repro. Assumes the OpenCL C++ bindings header cl2.hpp; adjust to your SDK.
#define CL_HPP_TARGET_OPENCL_VERSION 200
#include <CL/cl2.hpp>

#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    // Collect all GPU devices from every installed platform.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    for (auto& platform : platforms)
    {
        std::vector<cl::Device> platformDevices;
        platform.getDevices(CL_DEVICE_TYPE_GPU, &platformDevices);
        std::copy(platformDevices.cbegin(), platformDevices.cend(), std::back_inserter(devices));
    }

    auto& device = devices[0]; // The AMD card here
    cl::Context context{ device };
    cl::CommandQueue queue{ context };

    //const auto size = 4ull * 1024 * 1024 * 1024;                    // 4GB       -> CL_MAP_FAILURE (-12)
    const auto size = 4ull * 1024 * 1024 * 1024 - 1ull * 1024 * 1024; // 4GB - 1MB -> OK
    cl::Buffer buf{ context, CL_MEM_READ_ONLY, size };

    // Blocking map of the whole buffer; fails with CL_MAP_FAILURE once the size reaches 4GB.
    cl_int err = CL_SUCCESS;
    queue.enqueueMapBuffer(buf, CL_TRUE, CL_MAP_READ, 0, buf.getInfo<CL_MEM_SIZE>(),
                           nullptr, nullptr, &err);
    std::cout << "enqueueMapBuffer error: " << err << std::endl;

    return 0;
}
```

0 Likes

Also, my clinfo output:

Platform Name: AMD Accelerated Parallel Processing
Number of devices: 1
Device Type: CL_DEVICE_TYPE_GPU
Vendor ID: 1002h
Board name: Radeon RX 580 Series
Device Topology: PCI[ B#38, D#0, F#0 ]
Max compute units: 36
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 1024
Max work group size: 256
Preferred vector width char: 4
Preferred vector width short: 2
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 4
Native vector width short: 2
Native vector width int: 1
Native vector width long: 1
Native vector width float: 1
Native vector width double: 1
Max clock frequency: 1396Mhz
Address bits: 64
Max memory allocation: 7073274265
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 64
Max image 2D width: 16384
Max image 2D height: 16384
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 1024
Alignment (bits) of base address: 2048
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: No
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 64
Cache size: 16384
Global memory size: 8589934592
Constant buffer size: 7073274265
Max number of constant args: 8
Local memory type: Scratchpad
Local memory size: 32768
Max pipe arguments: 16
Max pipe active reservations: 16
Max pipe packet size: 2778306969
Max global variable size: 6365946624
Max global variable preferred total size: 8589934592
Max read/write image args: 64
Max on device events: 1024
Queue on device max size: 8388608
Max on device queues: 1
Queue on device preferred size: 262144
SVM capabilities:
Coarse grain buffer: Yes
Fine grain buffer: Yes
Fine grain system: No
Atomics: No
Preferred platform atomic alignment: 0
Preferred global atomic alignment: 0
Preferred local atomic alignment: 0
Kernel Preferred work group size multiple: 64
Error correction support: 0
Unified memory for Host and Device: 0
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: No
Queue on Host properties:
Out-of-Order: No
Profiling : Yes
Queue on Device properties:
Out-of-Order: Yes
Profiling : Yes
Platform ID: 00007FFC5DB00000
Name: Ellesmere
Vendor: Advanced Micro Devices, Inc.
Device OpenCL C version: OpenCL C 2.0
Driver version: 3110.7
Profile: FULL_PROFILE
Version: OpenCL 2.0 AMD-APP (3110.7)

0 Likes

Thank you for providing the above information. We'll look into this and get back to you shortly.

Thanks.

0 Likes

It's been some time now. Any updates? Or can you reproduce the issue?

0 Likes
german
Staff

To map GPU memory that lives in the invisible heap, the runtime has to allocate GPU-accessible system memory, copy the data from the GPU, and return a pointer to that system memory. The current logic uses the OS VidMM to allocate the system memory, but VidMM has a limit of around 4GB per allocation, hence the map fails.
NVIDIA's implementation most likely works because it allocates system memory without VidMM, using a plain malloc() call, and then pins it in chunks to bypass the ~4GB VidMM limit. The AMD runtime will switch to that method in upcoming releases. In the meantime, the application has to split the map into pieces, using an offset and a smaller size for each map.
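For anyone who needs it right away, here is a minimal sketch of that workaround, assuming the same cl2.hpp bindings as the repro above. The readBufferInChunks helper name and the 2GB chunk size are illustrative choices, not part of the runtime; the point is simply that each map call stays below the ~4GB limit by passing an offset and a smaller size.

```cpp
// Workaround sketch: map a large buffer in sub-4GB pieces instead of one big map call.
// Assumes the cl2.hpp C++ bindings; chunk size and error handling are illustrative.
#define CL_HPP_TARGET_OPENCL_VERSION 200
#include <CL/cl2.hpp>

#include <algorithm>
#include <cstring>

// Reads totalSize bytes from buf into dst by mapping the buffer in chunks that
// each stay below the per-allocation limit described above.
inline cl_int readBufferInChunks(cl::CommandQueue& queue, cl::Buffer& buf,
                                 void* dst, size_t totalSize,
                                 size_t chunkSize = 2ull * 1024 * 1024 * 1024) // 2GB per map
{
    for (size_t offset = 0; offset < totalSize; offset += chunkSize)
    {
        const size_t bytes = std::min(chunkSize, totalSize - offset);

        // Map only this piece of the buffer (offset + smaller size).
        cl_int err = CL_SUCCESS;
        void* mapped = queue.enqueueMapBuffer(buf, CL_TRUE, CL_MAP_READ,
                                              offset, bytes, nullptr, nullptr, &err);
        if (err != CL_SUCCESS)
            return err;

        // Copy the piece out, then release the mapping before moving to the next one.
        std::memcpy(static_cast<char*>(dst) + offset, mapped, bytes);
        err = queue.enqueueUnmapMemObject(buf, mapped);
        if (err != CL_SUCCESS)
            return err;
    }
    return CL_SUCCESS;
}
```

In this sketch each mapped piece is copied into host memory and unmapped immediately; an application that only needs to stream through the data could just as well process each mapped chunk in place before unmapping it.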