3 Replies Latest reply on Jun 3, 2010 1:11 AM by cjang

    Data transfer to Kernel

    Sheeep

  I have a problem: I want to calculate the inverse of an m×n matrix.

  So I want to transfer an int** to the CL kernel. I tried this host code:

          int **a = (int**)malloc(2*sizeof(int*));
          a[0] = (int*)malloc(5*sizeof(int));

      and:

          cl::Buffer CL1 = cl::Buffer(context, CL_MEM_READ_ONLY|CL_MEM_USE_HOST_PTR, sizeof(int)*2, a, &error);

       

      In the kernel I tried to run:

       

      __kernel void add(__global const int *a,__global int *c){

                c[0]=a[0][0];

      }

      but I get an NDRangeKernel error.

      I also tried __global const int **a, with the same error.

      How can I transfer an int** or an int[][] to the CL device?

       

        • Data transfer to Kernel
          MicahVillmow
          All pointers inside a kernel are one-dimensional; there are no pointers to pointers in OpenCL. To make your host code match what an OpenCL kernel expects, you need to transform it from an array of pointers to int into a single contiguous array of ints.
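          As a sketch of that transformation (not code from this thread; the helper names are hypothetical), the host can copy the array-of-pointers into one contiguous row-major buffer, and the kernel then indexes it as a[row * cols + col] instead of a[row][col]:

          ```cpp
          #include <cassert>
          #include <vector>

          // Copy a rows x cols array-of-pointers into one contiguous row-major
          // buffer, which is what an OpenCL cl::Buffer expects.
          std::vector<int> flatten(int** a, int rows, int cols) {
              std::vector<int> flat(rows * cols);
              for (int r = 0; r < rows; ++r)
                  for (int c = 0; c < cols; ++c)
                      flat[r * cols + c] = a[r][c];  // element (r,c) lives at r*cols + c
              return flat;
          }

          // Kernel side, a single pointer with manual 2-D indexing:
          //
          //   __kernel void add(__global const int *a, const int cols,
          //                     __global int *c) {
          //       c[0] = a[0 * cols + 0];   // what a[0][0] meant on the host
          //   }

          int main() {
              int r0[] = {1, 2, 3};
              int r1[] = {4, 5, 6};
              int* a[] = {r0, r1};
              std::vector<int> flat = flatten(a, 2, 3);
              assert(flat[0] == 1);           // a[0][0]
              assert(flat[1 * 3 + 2] == 6);   // a[1][2]
              return 0;
          }
          ```

          The flat vector's data() pointer can then be handed to cl::Buffer with a size of rows*cols*sizeof(int), rather than sizeof(int*)*rows as in the original post.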
          • Data transfer to Kernel
            lancaster

             

            Hi. I am curious as to where you got the matrix inversion kernel from, or did you write it yourself? I have been looking everywhere for one. Could you please tell me where you got it, or (if you wouldn't mind) send me the code?



              • Data transfer to Kernel
                cjang

                I am guessing that Sheeep didn't mean matrix inversion but rather transposition?

                You may already well know this...

                Generally, explicit matrix inversion is avoided (i.e. essentially never done). Even when the inverse exists, poor numerical conditioning can make the calculation unreliable. For example, solving A*x = b for x is equivalent to calculating x = inverse(A)*b. It's almost always much cheaper and more accurate to solve the linear system (especially if there is structure to exploit) than to invert the system matrix directly.
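                To illustrate that point (a minimal sketch, not code from this thread), a small dense system can be solved directly by Gaussian elimination with partial pivoting, never forming inverse(A) at all; production code would instead call a library routine such as LAPACK's dgesv:

                ```cpp
                #include <cassert>
                #include <cmath>
                #include <vector>

                // Solve A*x = b for x by Gaussian elimination with partial pivoting.
                // A is n x n, b has length n; both are taken by value and destroyed.
                std::vector<double> solve(std::vector<std::vector<double>> A,
                                          std::vector<double> b) {
                    const int n = static_cast<int>(b.size());
                    for (int k = 0; k < n; ++k) {
                        // Partial pivoting: bring the largest |A[i][k]| to row k
                        // for numerical stability.
                        int p = k;
                        for (int i = k + 1; i < n; ++i)
                            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
                        std::swap(A[k], A[p]);
                        std::swap(b[k], b[p]);
                        // Eliminate the entries below the pivot.
                        for (int i = k + 1; i < n; ++i) {
                            double f = A[i][k] / A[k][k];
                            for (int j = k; j < n; ++j) A[i][j] -= f * A[k][j];
                            b[i] -= f * b[k];
                        }
                    }
                    // Back-substitution on the upper-triangular system.
                    std::vector<double> x(n);
                    for (int i = n - 1; i >= 0; --i) {
                        double s = b[i];
                        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
                        x[i] = s / A[i][i];
                    }
                    return x;
                }

                int main() {
                    // 2x + y = 3, x + 3y = 5  ->  x = 0.8, y = 1.4
                    std::vector<double> x = solve({{2, 1}, {1, 3}}, {3, 5});
                    assert(std::fabs(x[0] - 0.8) < 1e-9);
                    assert(std::fabs(x[1] - 1.4) < 1e-9);
                    return 0;
                }
                ```

                Elimination costs roughly one third of the flops of forming the full inverse and then multiplying, and structured systems (banded, symmetric positive definite, sparse) have even cheaper specialized solvers.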

                Instead of the matrix inverse, the singular value decomposition is often useful. Every matrix has one, unlike an inverse which does not exist if the matrix is singular. Latent semantic indexing and principal components analysis (PCA) are very closely related applications of this matrix factorization.