2 Replies Latest reply on Jul 1, 2008 7:13 PM by sgratton

    read/write global buffers?

    sgratton
      Hi there,

      Is it possible to use a global buffer in CAL for both reading and writing? I'd like to do an in-place operation (a Cholesky factorization) on a large matrix. If this is possible, how should I allocate the buffer, and are any resource declarations needed?

      The calculation would be structured so that, hopefully, there are no read/write hazards: either each kernel would read from one part of the matrix and write only to another part, or each thread would read and then write only its own individual element.
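
      For reference, the in-place pattern I have in mind is just the standard CPU version, something like the sketch below (a plain C illustration, not CAL code; it shows how every element is read and then overwritten in the same buffer):

      ```c
      #include <math.h>

      /* In-place Cholesky factorization (lower triangle) of an n x n
       * symmetric positive-definite matrix stored row-major in a[].
       * Each a[i*n+j] in the lower triangle is read and then overwritten
       * with L[i][j] -- the same "read one region, write another (or
       * your own element)" access pattern described above. */
      static int cholesky_in_place(double *a, int n)
      {
          for (int j = 0; j < n; ++j) {
              double d = a[j * n + j];
              for (int k = 0; k < j; ++k)
                  d -= a[j * n + k] * a[j * n + k];
              if (d <= 0.0)
                  return -1;                 /* not positive definite */
              a[j * n + j] = sqrt(d);
              for (int i = j + 1; i < n; ++i) {
                  double s = a[i * n + j];
                  for (int k = 0; k < j; ++k)
                      s -= a[i * n + k] * a[j * n + k];
                  a[i * n + j] = s / a[j * n + j];
              }
          }
          return 0;
      }
      ```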

      Thanks,
      Steven.
        • read/write global buffers?
          MicahVillmow
          Steven,
          The global buffer can be accessed both ways at the same time. The easiest approach is to allocate a single resource whose size covers both of your regions, then pass the offset to the output memory as an integer constant in the constant buffer.

          Just remember that there is no guaranteed synchronization between threads, and the only data you can reliably read back is the data that each thread itself wrote.
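
          A rough host-side sketch of what I mean (sizes and names are placeholders, and the loop stands in for the threads the kernel would launch):

          ```c
          #include <stddef.h>

          /* One linear global buffer holds both the input region (elements
           * 0..N-1) and the output region (elements N..2N-1); the output's
           * starting element is the integer constant you would pass via the
           * constant buffer. Each "thread" i reads from the input region and
           * writes only its own output element -- the one pattern that is
           * safe without inter-thread synchronization. */
          enum { N = 8 };                        /* elements per region (assumed) */

          static void kernel_thread(float *g, size_t out_offset, size_t i)
          {
              g[out_offset + i] = 2.0f * g[i];   /* read region A, write region B */
          }

          static void run(float *g /* length 2*N */)
          {
              size_t out_offset = N;             /* the constant-buffer offset */
              for (size_t i = 0; i < N; ++i)     /* each iteration models one thread */
                  kernel_thread(g, out_offset, i);
          }
          ```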

            • read/write global buffers?
              sgratton

              Hi Micah,

              Thanks for your reply. It's great that one can both read and write to a global buffer. However, having tried to play with them a bit, I am still a bit confused and wonder if you could shed any light on the following:

              They always seem to be addressed in IL as a 1D array, so does it make much difference whether they are allocated in CAL as 1D or 2D local resources via calResAllocLocal1D/2D with the global buffer flag? And should/does the 8192 limit on a dimension still apply? (I think the hardware guide says that global buffers are limited only by the size of local memory.) The hardware guide also mentions cached versus uncached reads of a global buffer; how does one decide, or control, which is used?
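
              To make the addressing part of my question concrete, here is the mapping I have in mind for treating the 1D global buffer as a 2D matrix (just arithmetic, assuming the row stride is whatever pitch the allocation reports rather than the logical width):

              ```c
              #include <stddef.h>

              /* Linearize a 2D element index into the flat 1D index used to
               * address the global buffer in IL. A 2D allocation may pad each
               * row, so `pitch` (the per-row stride in elements, as reported
               * when the resource is mapped) is used instead of the matrix
               * width -- an assumption on my part about how the 2D case works. */
              static size_t linear_index(size_t row, size_t col, size_t pitch)
              {
                  return row * pitch + col;
              }
              ```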

              Thanks a lot,
              Steven.