I created a 3D image using clCreateImage3D(...) and then used clEnqueueWriteImage(...) to copy host memory to the device image.
I am confused about how to compute the input_slice_pitch value. The spec says that the value must be greater than or equal to row_pitch * height.
How do I calculate the optimal value of input_slice_pitch?
There is no "optimal" slice pitch. The purpose of the parameter is to tell the driver where each slice begins in host memory. If your data is tightly packed in (host) memory, you can leave that parameter set to zero, and the driver will compute the pitch automatically. On the other hand, if you have padding between the slices, you have to set it to the number of bytes that separate the beginning of one slice from the beginning of the next.
There is no row_pitch or slice_pitch for a 1D image. Row pitch only matters for 2D (and 3D) images, where rows may be padded in memory. Suppose you have a host memory region of 2048 x 2048 elements, but you want to create a cl_image using only 1/4 of the elements, the top-left quadrant, so the cl_image dimensions are 1024 x 1024. Here the row pitch corresponds to 2048 elements, not 1024 (note that the pitch parameters are specified in bytes, so the actual value is 2048 times the element size).
Alternatively, if you decided to create a cl_image of dimensions 2048 x 512, the row pitch would again correspond to 2048 elements. But this time there is no padding between the rows of your cl_image, so you can simply leave row_pitch as zero and the runtime computes it automatically.
A similar argument applies to slice_pitch, which comes into the picture for 3D images. Hope this helps.