5 Replies Latest reply on May 30, 2011 4:57 PM by rick.weber

    get_image_dim returns wrong result on radeon 5870

    rick.weber

      When I run my kernel on the CPU, get_image_dim() returns the correct result, but when I run it on my Radeon 5870 it returns the wrong result. Also, what exactly are the restrictions on image dimensions? Must the width be a multiple of 64? Must the height and width be powers of two?

        • MicahVillmow
          There should be no restrictions at the OpenCL level on images. We are passing OpenCL conformance which tests this function directly so you might have hit a corner case. As always a test case that shows the problem would be greatly beneficial in helping us track it down.
            • rick.weber

              This problem is really bizarre. It happens with the attached example. If I change #if 1 to #if 0, the problem goes away. Alternatively, I can get rid of types.cl and not pass the 2nd parameter. All other code is commented out. To make matters even weirder, only the dimensions of database are wrong, while the dimensions of spectrum are always correct.

              When I allocated database, the allocation returned CL_SUCCESS, and I run no kernels between the allocation and this call. I have performed other allocations and deviceWrites. I'm going to remove even more code and see if I can isolate the problem further. Furthermore, this problem occurs ONLY on my Radeon 5870; the results are correct on my Core i7.

              main.cl:

                  #include <specmaster.cl>
                  #include <debug.cl>

              types.cl:

                  #pragma once

                  typedef struct _MSScanInfo
                  {
                    unsigned int mScanNum;
                    unsigned int mMinMZ;
                    unsigned int mMaxMZ;
                    unsigned int mTotalCurrent;
                    unsigned int mMSLevel;
                    unsigned int mNumPeaks;
                    unsigned int mPrecursorMZ;
                    unsigned int mPrecursorCharge;
                    unsigned int mNumHoles;
                    unsigned int mCharge;
                    unsigned int mIsGoodSpectra;
                  } MSScanInfo;

                  typedef enum _BYIon
                  {
                    kBIon,
                    kYIon
                  } BYIon;

              specmaster.cl:

                  #pragma once

                  //#include <clUtilImage1D.cl>
                  //#include <tables.cl>
                  #include <types.cl>
                  //#include <scoreHelpers.cl>
                  //#include <constants.cl>

                  #pragma OPENCL EXTENSION cl_amd_printf : enable

                  #if 0
                  #if 0
                  #define __BARRIER(type) \
                    printf("\t__BARRIER hit by thread %d\n", get_local_id(0)); \
                    barrier(type);
                  #else
                  #define __BARRIER(type) \
                    barrier(type);
                  #endif
                  #endif

                  #if 1
                  __kernel void foo(write_only image2d_t bar)
                  {
                    int2 coord = (get_global_id(0), get_global_id(1));
                    float4 output = (0.0f, 0.0f, 0.0f, 0.0f);

                    write_imagef(bar, coord, output);
                  }
                  #endif

                  __kernel void matchSpectra(read_only image2d_t spectrum,
                                             __global MSScanInfo* scanInfo,
                                             read_only image2d_t database,
                                             __global unsigned int* peptideMasses,
                                             unsigned int numPeptides,
                                             __global float* peptideScores,
                                             __global unsigned int* peptideScoresIdx)
                  {
                    peptideScoresIdx[0] = get_image_width(database);
                    peptideScoresIdx[1] = get_image_width(spectrum);
                  }

              debug.cl:

                  __kernel void viewMemory(__global void* memory)
                  {
                    return;
                  }

                  __kernel void getImageInfo(read_only image2d_t theImage,
                                             __global unsigned int* info)
                  {
                    info[0] = get_image_width(theImage);
                    info[1] = get_image_height(theImage);
                  }
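              As a side note (possibly unrelated to the get_image_dim problem): the initializers in foo look suspect. OpenCL C requires an explicit cast for a vector literal, e.g. (int2)(get_global_id(0), get_global_id(1)); without the cast, the parenthesized list is an ordinary C comma expression that evaluates to its last operand only. A minimal plain-C sketch of the pitfall (comma_value is an illustrative name, not from the original code):

                  #include <stdio.h>

                  /* Returns the value of the comma expression (a, b): both
                     operands are evaluated, but the result is the SECOND one.
                     This is what (get_global_id(0), get_global_id(1)) means
                     without an (int2) cast in front of it. */
                  static int comma_value(int a, int b)
                  {
                      (void)a; /* silence unused-value warnings */
                      return (a, b);
                  }

                  int main(void)
                  {
                      printf("%d\n", comma_value(1, 2)); /* prints 2 */
                      return 0;
                  }

              So coord ends up as (int2)(get_global_id(1), get_global_id(1)) broadcast from a scalar, and output likewise becomes (float4)(0.0f) rather than a four-element literal.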

                • rick.weber

                  To make matters stranger, getImageInfo DOES return the correct dimensions for the image, and I call it immediately after matchSpectra(). This means the image isn't corrupted on the device.

                  I've commented out pretty much everything except allocations used by the matchSpectra() kernel, so there's nothing else that can be causing this. I can access the pixels themselves, but when the dimensions returned are incorrect, my program can't emulate a 1D texture like it needs to. I see two workarounds: explicitly pass the leading dimension to the kernel, or get rid of the other routines that access images in the same file. Neither is attractive, since the former is cumbersome and the latter requires changing a library I already have.
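                  A minimal host-side sketch of the first workaround: emulating a 1D texture over a 2D image by passing the leading dimension (the image width) in explicitly, instead of trusting get_image_dim()/get_image_width() inside the kernel. The names here are illustrative, not from the original code; in the real kernel the width would arrive as an extra kernel argument.

                      #include <stdio.h>

                      /* Map a flat 1D element index onto 2D image coordinates,
                         given the image's leading dimension (its width). */
                      static void index_to_coord(unsigned int idx, unsigned int width,
                                                 unsigned int *x, unsigned int *y)
                      {
                          *x = idx % width; /* column within the row */
                          *y = idx / width; /* row */
                      }

                      int main(void)
                      {
                          unsigned int x, y;
                          index_to_coord(5000u, 2048u, &x, &y);
                          printf("(%u, %u)\n", x, y); /* element 5000 -> column 904, row 2 */
                          return 0;
                      }

                  The same arithmetic inside the kernel then needs only the width parameter, so a wrong get_image_width() result on the 5870 would no longer matter.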