For APUs, both memory regions are on the same SDRAM, so memory transfers are basically memcpys. No PCIe transfers should be involved.
So why does a developer even have to bother with all this zero-copy stuff?
My guess is that the GPU is not able to access arbitrary RAM (virtual memory issues, swapping, etc.), so the RAM is split into two parts: one for the CPU and another for the GPU.
I know it would be great just to do map, write/read, unmap, right? :)
But wait... I don't see what prevents the CPU from accessing memory dedicated to the GPU. Hmmm.
As you'll see if you watch the keynote talks from AFDS, while the future Fusion architecture plans are what you would expect, the current design uses two different memory spaces. Although the memory is in the same physical chips, the virtual space for that memory is different: there is still a concept of video memory. This is for many reasons: partly the complexity of making big hardware design changes in a single step, partly the driver model, and some others. The net result is that you have to tell the driver that the memory you are using is to be accessed through particular parts of the memory system.
Even beyond that, you have to consider that what you do has to fit within the rules of the OpenCL specification. For example, while you don't actually have to map and unmap buffers under zero-copy rules on Fusion architectures, the OpenCL spec says you need to do that to guarantee memory availability, so you should still try to be portable.
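To make the map/unmap point concrete, here is a minimal sketch of the portable pattern the spec requires. This is illustrative only: it assumes a `cl_context` and `cl_command_queue` already exist, and the `write_portably` helper name is my own invention. `CL_MEM_ALLOC_HOST_PTR` asks the runtime for host-visible memory, which is typically what enables zero copy on an APU, but the map/unmap calls are still required for portability.

```c
#include <CL/cl.h>
#include <string.h>

/* Illustrative sketch: fill a buffer using the spec-mandated
 * map/write/unmap pattern. Assumes `ctx` and `queue` were created
 * elsewhere; `write_portably` is a hypothetical helper name. */
cl_int write_portably(cl_context ctx, cl_command_queue queue,
                      const float *src, size_t n, cl_mem *out_buf)
{
    cl_int err;
    size_t bytes = n * sizeof(float);

    /* CL_MEM_ALLOC_HOST_PTR requests host-accessible memory; on a
     * Fusion APU this is what typically makes zero copy possible. */
    cl_mem buf = clCreateBuffer(ctx,
                                CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                                bytes, NULL, &err);
    if (err != CL_SUCCESS)
        return err;

    /* The spec requires mapping before the host touches the buffer,
     * even when the hardware could skip the copy entirely. */
    void *p = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                 0, bytes, 0, NULL, NULL, &err);
    if (err != CL_SUCCESS) {
        clReleaseMemObject(buf);
        return err;
    }

    /* On an APU this memcpy may be the only data movement involved. */
    memcpy(p, src, bytes);

    /* Unmapping hands the region back to the runtime; required by the
     * spec before a kernel may use the buffer. */
    err = clEnqueueUnmapMemObject(queue, buf, p, 0, NULL, NULL);
    if (err != CL_SUCCESS) {
        clReleaseMemObject(buf);
        return err;
    }

    *out_buf = buf;
    return CL_SUCCESS;
}
```

On discrete GPUs the same code still works; the map may then involve a real PCIe transfer, which is exactly why writing to the portable pattern is worthwhile.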
Originally posted by: LeeHowes As you'll see if you watch the keynote talks from AFDS while the future Fusion architecture plans are what you would expect, the current design uses two different memory spaces.
What are those mentioned "future Fusion" APUs? And when will they be available?
I'm afraid that that information is not yet public. Public information on the subject is what Phil Rogers described in his AFDS keynote.