8 Replies Latest reply on Jun 7, 2010 3:59 PM by MicahVillmow

    clCreateImage2D texture data preservation


      Is the data passed via the host pointer in a clCreateImage2D call preserved bit-for-bit, or do you perform some kind of conversion/filtering when the data is uploaded from the CPU to the GPU?

      The format is CL_RGBA with CL_FLOAT channels.


      I read a float4 with read_imagef(img, samp, int2). The .xyz components are real floats, but the .w component actually holds an int's bit pattern.


      I ask because I have some very sensitive data that I need to reinterpret with as_int(), and it seems the data is not correctly preserved when I launch the kernel... Or perhaps your as_int() is buggy? (I can't find a call to as_int() in your examples to test against.) Or perhaps endianness is affecting my data (but I upload the data from an Intel CPU to the GPU via clCreateImage2D with a host pointer...)


      I'll include a code example so you can understand my question better:


      Host code:

      cl_float4 data;
      data.s[0] = 1.0f;
      data.s[1] = 2.0f;
      data.s[2] = 3.0f;

      int i = -4;
      data.s[3] = *((float*)&i); /* reinterpret the int's bits as a float */


      ... then I create a 1x1 image:

      cl_image_format l_sImageFmt;
      l_sImageFmt.image_channel_data_type = CL_FLOAT;
      l_sImageFmt.image_channel_order = CL_RGBA;

      img = clCreateImage2D ( ctx, CL_MEM_READ_ONLY|CL_MEM_COPY_HOST_PTR, &l_sImageFmt,
                      1, 1, sizeof(cl_float4), &data, NULL );


      GPU Kernel:

      __constant sampler_t imgSamplerInt2 = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_NONE | CLK_FILTER_NEAREST;


      const float4 data = read_imagef ( img, imgSamplerInt2, (int2)(0,0) );

      const int i = as_int(data.w); // i should be -4, right? Well, it is NOT! (it's trash)






      And btw... as_int() should be much faster than a normal int cast if I already have the data properly formatted, shouldn't it?