I think you may find some interesting documentation about the RV770 on the Stream SDK main page:
In the download section there are several PDFs; have a look at "AMD R700-Family Instruction Set Architecture".
thanks a lot! perfect
Note that the R700 ISA document does not include any information about cache sizes or cache latencies. The cache sizes are not public AFAIK, but speculation is 8KB of L1 per SIMD (80KB L1 total) and 64KB*4 = 256KB of L2 total for the RV770.
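Since these are speculated figures rather than published ones, it may help to spell out the arithmetic behind the totals. This is just a sanity check on the numbers quoted above (10 SIMDs on RV770, 8KB L1 per SIMD, 4 L2 partitions of 64KB), not official data:

```python
# Speculative RV770 cache totals from the figures above (not official numbers).
SIMD_COUNT = 10        # RV770 has 10 SIMD engines
L1_PER_SIMD_KB = 8     # speculated L1 texture cache per SIMD
L2_BANKS = 4           # speculated: one L2 partition per memory channel
L2_PER_BANK_KB = 64    # speculated size per L2 partition

l1_total_kb = SIMD_COUNT * L1_PER_SIMD_KB   # 10 * 8 = 80 KB
l2_total_kb = L2_BANKS * L2_PER_BANK_KB     # 4 * 64 = 256 KB
print(l1_total_kb, l2_total_kb)             # prints: 80 256
```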
Rahul, you can find more information about caches and how they work in recently posted slides on how we optimized ACML-SGEMM for RV670 hardware. RV770 has a very similar cache structure, except that instead of having a 4-way L1 cache, each SIMD gets its own L1 cache.
More information can be found in documents here:
Regarding on-chip memory, I have been trying to figure out which memory is relevant to GPGPU computation and which is purely (mostly) beneficial to traditional graphics workloads.
Local Shared Memory (16KB per SIMD): This seems to be "general purpose"/"scratch" memory that IS useful for GPGPU and has no coherency.
Global Shared Memory (16KB): This also seems to be "general purpose"/"scratch" memory that IS useful for GPGPU and has no coherency.
L1 (8KB per SIMD?) (coherency?): This is a texture cache and is not really suited to GPGPU.
L2 (size?) (read-only, no coherency?): This is connected to the memory controller from what I can see, so it is (implicitly) used when accessing RAM. Is it also used when accessing Local & Global Shared Memory?
Texture Cache (size?) (coherency?): Does this exist, or are L1 & L2 both "texture cache"?
If anyone could agree/disagree/discuss my comments above it would be greatly appreciated.
Micah: Thanks for the link.
I don't need detailed information, just basic explanations to my questions would be great, and I'm sure the AMD employees could answer these questions easily...
From another thread...
Originally posted by: n0thing ATI's current OpenCL implementation is CPU-only, and caching from main memory is automatic, so I guess everything is cached except for texture buffers, since textures are not supported in the CPU implementation [they require fixed-function logic such as texture units and samplers].
For the GPU implementation here are my predictions:
1. Texture cache: There is a texture cache per SIMD unit, 8KB I think (on RV770). Texture caches are optimized for spatial locality in texture fetches, so you don't need to coalesce accesses yourself; the tiled rasterization order of textures (fetching a quad of texels) does it automatically.
2. Local memory (LDS) on RV770 is 16KB per SIMD unit (R800 should be 32KB, as it should support DX11). This memory is configured as 4 banks, each with 256 entries of 16 bytes, so you can read up to 4 aligned 32-bit words in one read access from the LDS. Writes have no bank conflicts because each thread can only write to its own private location; hence the LDS is not as generic as the shared memory specified by the OpenCL specification. R800 should support OpenCL's shared memory.
3. Constant cache is 64KB; no idea about coalescing.
4. OpenCL specification says : Reads and writes to global memory may be cached depending on the capabilities of the device.
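To make the LDS figures in point 2 concrete, here is a rough model of that layout: 4 banks, each with 256 entries of 16 bytes (4 * 256 * 16 = 16KB total). The bank-interleaving rule below (consecutive 16-byte entries rotating across banks) is an assumption for illustration, not a documented fact:

```python
# Rough model of the RV770 LDS described above: 4 banks of 256 x 16-byte
# entries. The entry->bank mapping is an ASSUMPTION (round-robin across
# banks), used only to illustrate how conflicts could arise.
BANKS = 4
ENTRIES_PER_BANK = 256
ENTRY_BYTES = 16
assert BANKS * ENTRIES_PER_BANK * ENTRY_BYTES == 16 * 1024  # 16KB total

def entry_and_bank(byte_addr):
    """Map a byte address to its 16-byte entry and (assumed) bank."""
    entry = byte_addr // ENTRY_BYTES
    return entry, entry % BANKS

def reads_conflict(addr_a, addr_b):
    """Two reads conflict if they need different entries in the same bank."""
    ea, ba = entry_and_bank(addr_a)
    eb, bb = entry_and_bank(addr_b)
    return ba == bb and ea != eb

# Four aligned 32-bit words inside one 16-byte entry: a single access.
words = [entry_and_bank(a) for a in (0, 4, 8, 12)]
print(words)  # all four land in entry 0, bank 0
```

Under this assumed mapping, addresses 0 and 64 would hit different entries of the same bank (a conflict), while 0 and 16 fall in different banks and can proceed in parallel.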
Here is what OpenCL specification says about constant address space :
The __constant or constant address space name is used to describe variables allocated in global memory and which are accessed inside a kernel(s) as read-only variables. These read-only variables can be accessed by all (global) work-items of the kernel during its execution. This qualifier can be used with arguments to functions (including __kernel functions) that are declared as pointers, or with local variables inside a function declared as pointers, or with global variables. Global variables declared in the program source with the __constant qualifier are required to be initialized.
Your algorithm choice will determine which memory is important. For example, simple_matmult does not use LDS or GDS, relies on the texture cache, and outperforms many if not all matrix-multiply algorithms on the RV770 that attempt to use LDS. NLM_Denoise also outperforms the equivalent algorithm that uses LDS.
So it isn't necessarily GPGPU/graphics in general that determines what memory you use; your problem domain and algorithmic choice should drive the decision.
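As a toy illustration of why a texture-cache-only matrix multiply can do well: a naive multiply rereads each input element many times with spatially local access patterns, which is exactly the kind of reuse a texture cache captures automatically, without explicit LDS staging. This Python sketch just counts that reuse; it is not the ATI simple_matmult sample itself:

```python
# Count how often each element of A and B is touched by a naive C = A*B.
# Heavy, spatially local reuse like this is what a texture cache can
# serve without any explicit staging into LDS.
def naive_matmul_with_counts(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    reads_a = [[0] * k for _ in range(n)]
    reads_b = [[0] * m for _ in range(k)]
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
                reads_a[i][p] += 1
                reads_b[p][j] += 1
    return C, reads_a, reads_b

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C, ra, rb = naive_matmul_with_counts(A, B)
print(C)         # [[19, 22], [43, 50]]
print(ra[0][0])  # each A element is read once per output column: 2
```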
OK, thanks. But in general, is it fair to say that the texture cache is an artifact of graphics workloads and was not intended to be used this way?
Also, is it possible to confirm/deny the other "?" marks in my post above, please?
The slides should give you all of that information. If not please let me know.
It's not clear to me. Low-level development is not my specialty. Also, I am viewing these slides in OpenOffice and many of the images seem to be poorly formatted... but mostly, I'm inexperienced with all of this still :-)
If you can clarify the questions please do.