So I finally got my APU test system (I paid for it!):
-CPU: AMD Ryzen 5 2400G
-MB: ASRock Fatal1ty X470 Gaming-ITX/ac
-RAM: G.Skill 3200 C14, 16GB*2
-OS: Windows 10 Pro
-IDE and compiler: Visual Studio 2017 Community
As it turns out, the exact same OpenCL code runs *slower* on the APU compared to running it on an RX 480 (7950 not tested).
Here is my setup and my thinking; I'd appreciate any ideas on how to track down the bottleneck.
-From the host I created an array of 200e6 single-precision floats (A). Two more containers, B and C, of the same size are also created on the host. cl_mem buffers d_A, d_B, d_C are created with the flag CL_MEM_USE_HOST_PTR, pointing to the above three containers. One more cl_mem object, d_temp, of the same size, is created as temporary storage without the USE_HOST_PTR flag.
-No mapping is done at all, as all operations are carried out by the GPU alone. (Is this even correct? It seems to contradict many use cases of USE_HOST_PTR.)
-Two kernels are run: kernel 1 is a scaling operation that writes its result into d_temp; kernel 2 reads d_temp and creates the outputs d_B = d_temp*cos(global_id*k) and d_C = d_temp*sin(global_id*k).
-Operations are finished. Buffers are freed on the GPU.
With the above, the RX 480 takes around 0.40 s, but the APU takes up to 0.62 s. I was expecting the APU to be faster, since the discrete GPU is limited by the PCI-E bus.
(The total transfer is 800 MB host-to-device and 1600 MB device-to-host; should I expect a shorter time from the RX 480?)
I suspect I haven't done something needed to enable zero-copy, although I did make sure the 4 kB alignment and 64 kB buffer-size requirements were fulfilled.
Another guess: although I have removed the PCI-E bus limit, the APU is now limited by the RAM bandwidth, which is at most ~40 GB/s. Still, I expected the time spent to be less.
Your comments are appreciated. If I wasn't clear somewhere and you wouldn't mind looking at the code, let me know and I'll gladly share it.