
neworderofjamie
Adept I

OpenCL compiler bug

I've been working on adding OpenCL support to our code generator (GitHub - genn-team/genn: GeNN is a GPU-enhanced Neuronal Network simulation environment based on cod... ). The generated code now works on NVIDIA, Intel and ARM devices, but we have had ongoing issues getting it to work on AMD GPUs. With new enough Adrenalin drivers and a relatively modern GCN GPU, all now seems good on Windows. However, in order to do some more rigorous testing in-house, we have now bought a Radeon 5700 XT for one of our Linux machines and, using the AMDGPU-Pro 20.30 drivers, we are seeing similarly broken behaviour.

I have reduced the problem to the attached minimal reproduction which, on NVIDIA and Intel hardware, prints out:

1,1,0,0

However, on the 5700 XT, it prints out:

0,0,0,0

Similarly to our previous issue, if you add any printf to any kernel, it works correctly. Additionally, interspersing commandQueue.flush() calls between kernel launches has the same effect; however, from my understanding of the OpenCL spec, this should not be necessary.
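For context, the flush workaround amounts to something like the following (a sketch using the OpenCL C++ wrapper; the kernel, queue and buffer objects are those set up in the attached reproduction):

```cpp
// WORKAROUND SKETCH (should not be needed per the OpenCL spec):
// flushing after each enqueue on the in-order queue masks the bug.
commandQueue.enqueueNDRangeKernel(updatePresynapticKernel, cl::NullRange,
                                  globalWorkSize, localWorkSize);
commandQueue.flush();   // without this, the later read sees stale zeros

commandQueue.enqueueNDRangeKernel(updateNeuronsKernel, cl::NullRange,
                                  globalWorkSize, localWorkSize);
commandQueue.flush();

// Blocking read of the 4-float output buffer
commandQueue.enqueueReadBuffer(d_xPost, CL_TRUE, 0, 4 * sizeof(float), xPost);
```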

38 Replies
dipak
Staff

Thank you for reporting the above issue and providing the reproducible test case. We will look into this and get back to you. Please also attach the clinfo output.

Thanks.

neworderofjamie
Adept I

Thanks for your rapid response. The clinfo output for the AMD device is as follows:

Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.1 AMD-APP (3143.9)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Host timer resolution 1ns
Platform Extensions function suffix AMD

Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx1010
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 2.0 AMD-APP (3143.9)
Driver Version 3143.9 (PAL,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Board Name (AMD) AMD Radeon RX 5700 XT
Device Topology (AMD) PCI-E, 8f:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 20
SIMD per compute unit (AMD) 4
SIMD width (AMD) 32
SIMD instruction width (AMD) 1
Max clock frequency 2100MHz
Graphics IP (AMD) 10.10
Device Partition (core)
Max number of sub-devices 20
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x1024
Max work group size 256
Preferred work group size (AMD) 256
Max work group size (AMD) 1024
Preferred work group size multiple 32
Wavefront width (AMD) 32
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs No
Round to nearest No
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 8573157376 (7.984GiB)
Global free memory (AMD) 8306688 (7.922GiB)
Global memory channels (AMD) 8
Global memory banks per channel (AMD) 4
Global memory bank width (AMD) 256 bytes
Error Correction support No
Max memory allocation 7287183769 (6.787GiB)
Unified memory for Host and Device No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 2048 bits (256 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 6558465280 (6.108GiB)
Preferred total size of global vars 8573157376 (7.984GiB)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 256 bytes
Pitch alignment for 2D image buffers 256 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 2048x2048x2048 pixels
Max number of read image args 128
Max number of write image args 64
Max number of read/write image args 64
Max number of pipe args 16
Max active pipe reservations 16
Max pipe packet size 2992216473 (2.787GiB)
Local memory type Local
Local memory size 65536 (64KiB)
Local memory syze per CU (AMD) 65536 (64KiB)
Local memory banks (AMD) 32
Max number of constant args 8
Max constant buffer size 7287183769 (6.787GiB)
Preferred constant buffer size (AMD) 16384 (16KiB)
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution No
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 262144 (256KiB)
Max size 8388608 (8MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Number of P2P devices (AMD) 0
P2P devices (AMD) <printDeviceInfo:144: get number of CL_DEVICE_P2P_DEVICES_AMD : error -30>
Profiling timer resolution 1ns
Profiling timer offset since Epoch (AMD) 1600456210444955455ns (Fri Sep 18 20:10:10 2020)
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Thread trace supported (AMD) Yes
Number of async queues (AMD) 4
Max real-time compute queues (AMD) 1
Max real-time compute units (AMD) 0
printf() buffer size 4194304 (4MiB)
Built-in kernels
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_khr_gl_depth_images cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_amd_copy_buffer_p2p


I was able to reproduce the issue on Windows with the latest driver, so I'm not sure whether this is related to the other issue you mentioned (https://community.amd.com/thread/254777 ). Did you try it on Windows?

I ran the code with CodeXL and found some interesting things. It does not seem to be a compilation issue. From the CodeXL profiler report, it looks like the buffer read is happening before all the kernels finish their execution; as a result, we are getting zero values. This seems to be a synchronization issue.

Here is the related code section:

CHECK_OPENCL_ERRORS(updatePresynapticKernel.setArg(0, d_mergedPresynapticUpdateGroup0));
CHECK_OPENCL_ERRORS(commandQueue.enqueueNDRangeKernel(updatePresynapticKernel, cl::NullRange, globalWorkSize, localWorkSize));

CHECK_OPENCL_ERRORS(updateNeuronsKernel.setArg(0, d_mergedNeuronUpdateGroup1));
CHECK_OPENCL_ERRORS(commandQueue.enqueueNDRangeKernel(updateNeuronsKernel, cl::NullRange, globalWorkSize, localWorkSize));

// Copy output back to host
CHECK_OPENCL_ERRORS(commandQueue.enqueueReadBuffer(d_xPost, CL_TRUE, 0, 4 * sizeof(float), xPost));

Because the output buffer "d_xPost" is only accessed indirectly in the kernel, it appears to the runtime to be independent of the two buffers that are passed to the kernels, i.e. "d_mergedPresynapticUpdateGroup0" and "d_mergedNeuronUpdateGroup1". As a result, the enqueueReadBuffer also appears to be independent of those two kernels, so it can be executed independently without affecting program consistency (from the runtime's point of view). I suspect that is what is happening here.

From the CodeXL profiler report, it seems that all the commands were submitted to the queue in the same order as in the program, but on the device the enqueueReadBuffer was actually executed before the above two kernels completed (the two kernels themselves were executed in order). I think the description below, from the AMD OpenCL optimization guide, might help explain this behaviour.

It is best to use non-blocking commands to allow multiple commands to be queued before the command queue is flushed to the GPU. This sends larger batches of commands, which amortizes the cost of preparing and submitting work to the GPU. Use event tracking to specify the dependence between operations. It is recommended to queue operations that do not depend of the results of previous copy and map operations. This can help keep the GPU busy with kernel execution and DMA transfers. Command execution begins as soon as there are commands in the queue for execution.

If commandQueue.flush() is added before the buffer read, all of these commands appear to follow program order.

Also, when an enqueueReadBuffer(d_mergedNeuronUpdateGroup1, ...) was added before reading the output buffer "d_xPost", without any flush() call, the output was as expected.

Regarding the printf behaviour, it seems that some implicit synchronization is added when kernels contain printf statements, which is why we get the expected output.
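For illustration, the event tracking recommended by the optimization guide could be expressed with the C++ wrapper roughly as follows (a sketch only; the object setup is assumed from the attached test case):

```cpp
// Make the read's dependence on both kernels explicit via events
// (on an in-order queue this should in principle be redundant).
cl::Event presynapticDone, neuronsDone;
commandQueue.enqueueNDRangeKernel(updatePresynapticKernel, cl::NullRange,
                                  globalWorkSize, localWorkSize,
                                  nullptr, &presynapticDone);

std::vector<cl::Event> afterPresynaptic{presynapticDone};
commandQueue.enqueueNDRangeKernel(updateNeuronsKernel, cl::NullRange,
                                  globalWorkSize, localWorkSize,
                                  &afterPresynaptic, &neuronsDone);

std::vector<cl::Event> afterNeurons{neuronsDone};
commandQueue.enqueueReadBuffer(d_xPost, CL_TRUE, 0, 4 * sizeof(float),
                               xPost, &afterNeurons);
```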

Could you please try the CodeXL profiler to see if you have similar findings?

Thanks.

Thank you so much for your detailed investigation of this issue. You're right that this does indeed occur on Windows and that part of the problem was down to the read occurring too early (which is very counter-intuitive on an in-order queue, but still). I entirely failed to get CodeXL or the Radeon Compute Profiler to work, but I added profiling events into the queue and you can see that, without any additional flushes, readXPostEventStart happens before any kernels (the long numbers are the times of the profiling events and everything is sorted by time):

762101045076768(mapXPostEventStart)
762101045111522(mapSpkCntPreEventStart)
762101045303778(fillXPostEventStart)
762101045307418(fillXPostEventEnd)
762101045307778(fillInSynEventStart)
762101045308098(fillInSynEventEnd)
762101045312138(buildNeuronKernelEventStart)
762101045314218(buildNeuronKernelEventEnd)
762101045317818(buildPresynapticKernelEventStart)
762101045319858(buildPresynapticKernelEventEnd)
762101045794578(writeSpkCntPreEventStart)
762101045800058(writeSpkCntPreEventEnd)
762101045924489(readXPostEventStart)
762101045924729(updatePresynapticEventStart)
762101045927369(readXPostEventEnd)
762101045930089(updatePresynapticEventEnd)
762101045930449(updateNeuronsEventStart)
762101045930769(updateNeuronsEventEnd)

However, when I add a single flush before the read (uncomment FLUSH_BEFORE_READ), the events are ordered correctly:

767847480970088(mapXPostEventStart)
767847481011538(mapSpkCntPreEventStart)
767847481308052(fillXPostEventStart)
767847481310852(fillXPostEventEnd)
767847481311212(fillInSynEventStart)
767847481311532(fillInSynEventEnd)
767847481315532(buildNeuronKernelEventStart)
767847481317012(buildNeuronKernelEventEnd)
767847481320372(buildPresynapticKernelEventStart)
767847481322012(buildPresynapticKernelEventEnd)
767847481509852(writeSpkCntPreEventStart)
767847481515172(writeSpkCntPreEventEnd)
767847481619399(updatePresynapticEventStart)
767847481624799(updatePresynapticEventEnd)
767847481625159(updateNeuronsEventStart)
767847481625479(updateNeuronsEventEnd)
767847481733447(readXPostEventStart)
767847481736487(readXPostEventEnd)

BUT the output is still incorrect (0,0,0,0). Only when I add another flush between the two kernel launches (uncomment both FLUSH_BEFORE_READ and FLUSH_BETWEEN_KERNELS) do I get the correct output (1,1,0,0):

767788672578162(mapXPostEventStart)
767788672615890(mapSpkCntPreEventStart)
767788672900801(fillXPostEventStart)
767788672905881(fillXPostEventEnd)
767788672906201(fillInSynEventStart)
767788672906561(fillInSynEventEnd)
767788672910561(buildNeuronKernelEventStart)
767788672912081(buildNeuronKernelEventEnd)
767788672915481(buildPresynapticKernelEventStart)
767788672917121(buildPresynapticKernelEventEnd)
767788673111441(writeSpkCntPreEventStart)
767788673116801(writeSpkCntPreEventEnd)
767788673223561(updatePresynapticEventStart)
767788673230921(updatePresynapticEventEnd)
767788673251235(updateNeuronsEventStart)
767788673253355(updateNeuronsEventEnd)
767788673358035(readXPostEventStart)
767788673361035(readXPostEventEnd)

Furthermore, attempting to replace the flush before the read with a wait on the update-neurons kernel's event (uncomment WAIT_BEFORE_READ) seems to have no effect whatsoever on the ordering, which is really bizarre:

825588782273194(mapXPostEventStart)
825588782305039(mapSpkCntPreEventStart)
825588782459757(fillXPostEventStart)
825588782462837(fillXPostEventEnd)
825588782463197(fillInSynEventStart)
825588782463517(fillInSynEventEnd)
825588782795957(buildNeuronKernelEventStart)
825588782797517(buildNeuronKernelEventEnd)
825588782801117(buildPresynapticKernelEventStart)
825588782802677(buildPresynapticKernelEventEnd)
825588782987157(writeSpkCntPreEventStart)
825588782992477(writeSpkCntPreEventEnd)
825588783142546(readXPostEventStart)
825588783144946(updatePresynapticEventStart)
825588783146266(readXPostEventEnd)
825588783150186(updatePresynapticEventEnd)
825588783150546(updateNeuronsEventStart)
825588783150866(updateNeuronsEventEnd)

The code with these additions is attached.

Thanks again for all your help

Jamie

dipak
Staff

Thank you for sharing the above information.

However, when I add a single flush event before reading (uncomment FLUSH_BEFORE_READ), the events are ordered correctly...

...
BUT the output is still incorrect (0,0,0,0). Only when I add another flush between the two kernel launches (uncomment FLUSH_BEFORE_READ and FLUSH_BETWEEN_KERNELS) do I get the correct output (1,1,0,0).

In my case, when I added a single flush before the read, I got the expected result. I tried it on Windows with a different card, so I'm not sure if that might be the reason for these different observations. I'll try the newly attached code with CodeXL and check the execution order. Also, if needed, I will share these findings with the OpenCL team for their feedback.

Thanks.


Interesting - I forgot to mention that we also reproduced the same behaviour on Windows with a Radeon RX 580 and 20.5.1 drivers.


Also, similarly to using events, using clEnqueueBarrierWithWaitList before the read has no effect on the ordering, which seems to directly contradict the spec's statement that it "waits for all commands previously enqueued in command_queue to complete before it completes" (https://www.khronos.org/registry/OpenCL/specs/opencl-1.2.pdf ).
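For reference, the barrier variant amounts to this (a sketch with the C++ wrapper, whose enqueueBarrierWithWaitList wraps clEnqueueBarrierWithWaitList):

```cpp
// With an empty wait list, the barrier should act as a full queue barrier:
// all subsequent commands wait for everything previously enqueued.
CHECK_OPENCL_ERRORS(commandQueue.enqueueBarrierWithWaitList());

// Per the spec, this read should therefore not start before the kernels,
// but on the affected drivers the ordering is unchanged.
CHECK_OPENCL_ERRORS(commandQueue.enqueueReadBuffer(d_xPost, CL_TRUE, 0,
                                                   4 * sizeof(float), xPost));
```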


Thank you for sharing these interesting findings. Earlier I didn't check the code with the event objects, but I was under the impression that it should work if some explicit synchronization is added to order those commands. However, as you said above, you are still observing the issue with event objects/barriers. In that case, I'll check with the OpenCL team for their feedback on this.

Thanks.


Indeed - while I can understand the potential optimisation benefits of scheduling buffer reads earlier than they are enqueued, ignoring synchronisation barriers and events seems like a bug to me.


Yes, I agree with you. I had also thought that it was just an optimization, as pointed out in the optimization guide: "this can help keep the GPU busy with kernel execution and DMA transfers".

Anyway, let me check with OpenCL team. I believe they can provide more insights regarding this.

Thanks again for providing these valuable inputs.

Thanks.


I ran the latest attached code on my setup and got similar findings to those you mentioned above. It does indeed seem that synchronization using events has no effect on the ordering.

Also, I checked with the OpenCL team. The code looks good to them and they have asked me to create a ticket to investigate the issue in detail. I'll create a ticket and include these test results. I'll let you know if I have any update on this.

Thanks.


Good to hear you can reproduce it. Does that mean you also require more than the single flush before reading to see correct results? Do you have any idea of a timescale on that ticket?

Thanks for all your continuing help with this!


 Does that mean you also require more than the single flush before reading to see correct results?

In my case, a single flush before the reading is enough to produce the correct result.

As I tried the macros, I observed below outputs and event orders:

  1. Default: (0, 0, 0, 0) -> readXPostEvent occurs before updatePresynapticEvent and updateNeuronsEvent 
  2. with FLUSH_BEFORE_READ: (1, 1, 0, 0) -> readXPostEvent occurs after updatePresynapticEvent and updateNeuronsEvent 
  3. with FLUSH_BETWEEN_KERNELS: (0, 0, 0, 0) -> readXPostEvent occurs after updatePresynapticEvent, but before updateNeuronsEvent 
  4. with WAIT_BEFORE_READ: (0, 0, 0, 0) -> readXPostEvent occurs before updatePresynapticEvent and updateNeuronsEvent 

I believe a clFinish before the read should work without any other clFlush; in that case, passing CL_TRUE to enqueueReadBuffer would effectively be a no-wait operation. I know these approaches may not be as efficient as event/barrier based synchronization, but they can be used as a workaround until a fix is available.
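The suggested clFinish workaround would look something like this (a sketch; names as in the attached code):

```cpp
// Drain the queue completely before reading: after finish() returns,
// both kernels are guaranteed (per spec) to have completed, so the
// blocking read should effectively have nothing left to wait for.
CHECK_OPENCL_ERRORS(commandQueue.finish());
CHECK_OPENCL_ERRORS(commandQueue.enqueueReadBuffer(d_xPost, CL_TRUE, 0,
                                                   4 * sizeof(float), xPost));
```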

Do you have any idea of a timescale on that ticket?

Sorry, it's difficult to provide any timeline at this moment. 


Sadly, a finish before the read results in correct ordering but incorrect output (like the single flush):

1007736281121096(mapXPostEventStart)
1007736281153370(mapSpkCntPreEventStart)
1007736281307106(fillXPostEventStart)
1007736281312386(fillXPostEventEnd)
1007736281312746(fillInSynEventStart)
1007736281313066(fillInSynEventEnd)
1007736281317066(buildNeuronKernelEventStart)
1007736281318546(buildNeuronKernelEventEnd)
1007736281322146(buildPresynapticKernelEventStart)
1007736281323546(buildPresynapticKernelEventEnd)
1007736281497466(writeSpkCntPreEventStart)
1007736281502786(writeSpkCntPreEventEnd)
1007736281599546(updatePresynapticEventStart)
1007736281604866(updatePresynapticEventEnd)
1007736281605226(updateNeuronsEventStart)
1007736281605586(updateNeuronsEventEnd)
1007736281726036(readXPostEventStart)
1007736281729036(readXPostEventEnd)

So the only workaround we currently have is to flush between every kernel launch, which is very detrimental to performance. I totally understand about the timeline, however; if you could keep me updated via this thread, that would be great.

dipak
Staff

Sure, I'll let you know if I get any update about this issue.

a finish before the read results in correct ordering but incorrect output (like the single flush)

This is another unexpected behaviour. Did you observe it on Windows or Linux? Please let me know your setup details and I'll mention this information in the related ticket.

I think it would be really helpful if you could provide a profiler report for these cases, i.e. with a single clFlush or clFinish.

Thanks.


We can reproduce this on both a Linux system with a Radeon 5700 XT and AMDGPU-Pro 20.30 drivers, and a Windows system with a Radeon RX 580 and 20.5.1 drivers. If I can get the profiler to work, I'll post the results here.


Thanks for the information.


Just FYI.

It looks like more recent drivers are available for both Windows (Adrenalin 20.9.1 WHQL and 20.9.2 Optional) and Linux (AMDGPU-Pro 20.40). As it is always recommended to verify an issue with the latest drivers, I would suggest trying those recent drivers to see if there are any different observations.

Please note, I tested with Adrenalin 20.9.1.

Thanks.


It's going to take us a little longer to get our Linux machine upgraded but, on Windows with an RX 580 and 20.9.2 drivers, the behaviour we see is unchanged, i.e. a single flush or finish before the read does not produce correct results. What GPU are you testing on?

Thanks


Ok, the Linux machine with the 5700 XT has been upgraded to AMDGPU-Pro 20.40 and the behaviour is also unchanged.


Hi there,

Just wondering whether there's been any progress on the OpenCL team's investigation of this issue?

Thanks


At this moment, I don't have any update that I can share with you. I'll let you know if I get any information on this issue.

Thanks.

german
Staff

I believe the app violates the OpenCL 1.2 spec. It's not allowed to store pointers from one kernel and reuse them in another; OpenCL is not CUDA. Even with OpenCL 2.0 SVM, the app still has to pass the CL memory objects that are hidden inside other buffers to every kernel via clSetKernelExecInfo; only fine-grain system SVM doesn't require that.

__kernel void buildNeuronUpdate1Kernel(__global struct MergedNeuronUpdateGroup1 *group,
                                       unsigned int idx, __global float *x, ...)
{
    // stores a __global pointer inside another buffer...
    group[idx].x = x;
}

__kernel void updateNeuronsKernel(...)
{
    // ...which is later fetched and dereferenced in a different kernel
    group->x[lid] = group->inSynInSyn0[lid];
}
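For the coarse-grained SVM case mentioned above, the required notification would look roughly like this (a sketch calling the C API from C++; xSvm and inSynSyn0Svm are hypothetical clSVMAlloc'd allocations, not names from the attached code):

```cpp
// With coarse-grained buffer SVM, every SVM allocation a kernel may reach
// indirectly (e.g. via pointers stored inside a struct) must be declared
// to the runtime before the kernel is launched.
void *svmPointers[] = {xSvm, inSynSyn0Svm};      // hypothetical allocations
CHECK_OPENCL_ERRORS(clSetKernelExecInfo(updateNeuronsKernel(),  // cl_kernel handle
                                        CL_KERNEL_EXEC_INFO_SVM_PTRS,
                                        sizeof(svmPointers), svmPointers));
```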


Sorry for taking so long to respond to this thread - I was busy with other things and then got locked out of the new forum (thanks @dipak for helping me with that). Could you clarify whether @german's response is the outcome of the OpenCL team's investigation of this issue, or just another opinion on it?


I work in the OCL team and it was the outcome of my investigation. The app can't hide pointers to memory objects inside an arbitrary memory location and fetch them later in another kernel without notifying the OCL runtime about the extra memory objects; that is a feature of fine-grain system SVM.
Potentially the app can use OCL sub-buffers, but make sure to pass the parent buffer into the kernel that fetches the pointers to the sub-buffers, even if that kernel doesn't access the parent directly.


Ok, that's good to know - I was always somewhat worried this approach violated the spec, although I still don't really understand what's actually going wrong. The pointers we're 'hiding' are in device memory and it all works fine if you insert a lot of flushes, so it's not as if the pointers are somehow virtualized and thus not transferable between kernels.

Additionally, none of the solutions I can think of are very satisfactory 😕 As I understand it, coarse-grained SVM would let us build the data structures we need, but we have no need for most of the data to be accessible from the host and really want to remain in control of copying data between host and device. I guess we could switch to HIP, where this kind of approach presumably works, but then we'd lose support for 90% of AMD hardware. Any suggestions would be much appreciated...


The runtime needs to know the memory objects in order for the MS video memory manager (VidMM) to work properly; there are also optimizations in the runtime which require knowledge of all used memory objects. Much older HW (from when OpenCL 1.0-1.2 was designed) wouldn't work even with flushes, because Windows required memory address patching upon submission to HW. Flushes (or rather finishes) serialize execution, disabling some optimizations and/or changing the timing of memory accesses.
I already mentioned a possible solution: the app can use sub-buffers, but the original parent object must be passed into the fetch kernel. However, it's not a 100% robust solution and the app may still need clFlush() before and after the kernel that fetches the saved pointers; at least the OCL runtime will then be able to pass proper usage information to VidMM.


Thank you for the additional information. Having read the documentation on clCreateSubBuffer, can you clarify how I could use it in my case? Are you suggesting allocating a single large buffer which I pass to every kernel and then, from that, creating sub-buffers which I point to in the structs?


That's correct. However, things are a bit more complicated than that. On gfx10 HW (Navi) it should work. On gfx9 HW it may still require an extra clFlush(), since the compiler can detect whether the global resource has R/W access and the runtime could then consider it a nop. To avoid that, the app may need some arbitrary single-DWORD write to the global memory object in the kernel that has access to the hidden objects.
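To make the scheme concrete, here is a sketch of the sub-buffer approach with the C++ wrapper (sizes, names and argument indices are illustrative, not from the attached code):

```cpp
// One large parent allocation, passed to every kernel so the runtime
// knows all indirectly-reachable memory is in use.
cl::Buffer parent(context, CL_MEM_READ_WRITE, totalBytes);

// Carve a region out of the parent; its origin must respect
// CL_DEVICE_MEM_BASE_ADDR_ALIGN (256 bytes on this device).
cl_buffer_region xRegion = {0, 4 * sizeof(float)};
cl::Buffer d_xPost = parent.createSubBuffer(CL_MEM_READ_WRITE,
                                            CL_BUFFER_CREATE_TYPE_REGION,
                                            &xRegion);

// Kernels that only dereference the stored pointers must still be given
// the parent buffer as an argument, even if they never use it directly.
updateNeuronsKernel.setArg(1, parent);          // illustrative arg index
```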


Apologies once again for taking such a long time to reply. I have now implemented your suggestion and, on the 5700 XT, I can now remove all the flushes and am getting competitive performance, which is fantastic. However, I have encountered one issue. For simplicity, I wanted to allocate a single large buffer of CL_DEVICE_MAX_MEM_ALLOC_SIZE bytes but, when I do this, performance drops hugely until I reduce the buffer size a bit (around 4GB is fine - I haven't managed to find the exact threshold). Is this a known issue or am I misunderstanding CL_DEVICE_MAX_MEM_ALLOC_SIZE? For reference, clinfo shows the following related values:

Global memory size (CL_DEVICE_GLOBAL_MEM_SIZE) 8573157376 (7.984GiB)

Max memory allocation (CL_DEVICE_MAX_MEM_ALLOC_SIZE) 7059013632 (6.574GiB)

Max size for global variable (CL_DEVICE_MAX_GLOBAL_VARIABLE_SIZE) 6353112064 (5.917GiB)

Thanks again for all your help

 

Jamie


I can't say much without investigating. In general, Windows doesn't support a single allocation of >4GB, and the runtime needs extra logic to handle that case, but the split is enabled even for much smaller allocations. Usually a performance drop occurs when the runtime can't fit the allocation inside device memory and falls back to system memory. Check a memory monitor and see if something else is consuming GPU memory on your system.


Thanks for your rapid response. The allocation coming from system memory would totally explain this, but this is on Linux and CL_DEVICE_GLOBAL_FREE_MEMORY_AMD reports that there is 7.922 GiB free. I can try and make a minimal reproducible example if that would help? Also, on Windows, would the 32-bit limit be reflected in the CL_DEVICE_MAX_MEM_ALLOC_SIZE device info?


Yes, a 32-bit binary should limit CL_DEVICE_MAX_MEM_ALLOC_SIZE.


So, on 64-bit Linux, what can I do to investigate why a CL_DEVICE_MAX_MEM_ALLOC_SIZE-byte buffer appears to be allocated in system memory?


That's my guess - I don't know for sure. Is it really the first allocation in the app? You allocate >4GB and a kernel then has low performance?


It is indeed the first allocation in the app, but the size at which everything slows down (found after some binary searching) is actually 4645191681 bytes, which doesn't seem to have any significance in binary or any relation to any of the device info values.


After the app allocates the memory, just run clEnqueueFillBuffer() (use a clear pattern of 4 or 8 bytes) and measure performance. Do you see the drop with >4645191681 bytes?
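Such a measurement could be sketched as follows with the C++ wrapper (assumes a queue created with CL_QUEUE_PROFILING_ENABLE; bufferBytes is the size under test):

```cpp
// Time a clear of the freshly allocated buffer; bandwidth far below the
// device's memory bandwidth would suggest a system-memory fallback.
cl::Buffer big(context, CL_MEM_READ_WRITE, bufferBytes);
const cl_uint zero = 0;                         // 4-byte clear pattern
cl::Event fillDone;
commandQueue.enqueueFillBuffer(big, zero, 0, bufferBytes, nullptr, &fillDone);
fillDone.wait();
const cl_ulong ns = fillDone.getProfilingInfo<CL_PROFILING_COMMAND_END>()
                  - fillDone.getProfilingInfo<CL_PROFILING_COMMAND_START>();
const double gbPerSec = (bufferBytes / 1.0e9) / (ns / 1.0e9);
```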


That would have been an excellent test but... after the machine was rebooted, the issue no longer occurs. Thanks again for your help.
