
opencl_JEDI
Journeyman III

Gaussian blur

Hi,

I'm trying to do a Gaussian blur on an image, but all the algorithms I found are for a separable Gaussian (the blur is done horizontally, then vertically), i.e. two 1-dimensional operations.

I'm looking for how to perform a single-pass, 2-dimensional Gaussian blur.

 

Thanks

0 Likes
4 Replies
nou
Exemplar

That is a bad idea: with an N-wide convolution you do O(N^2) work per pixel, versus O(2N) for the two-pass separable version.

But a 2D Gaussian blur is simple: take the N*N neighborhood of the input pixel, multiply it element-wise with the convolution matrix, and sum the products. That's all. It just gets slow for anything larger than 3x3.
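For illustration, here is a rough, untested sketch of that direct approach as an OpenCL kernel; the flat grayscale buffer layout, the fixed 5x5 mask in __constant memory, and all argument names are my own assumptions, not code from a real project:

__kernel void gaussian_blur_2d(__global const float *src,   /* grayscale input, width*height */
                               __global float *dst,         /* blurred output */
                               __constant float *mask,      /* 5x5 normalized Gaussian weights */
                               const int width,
                               const int height)
{
    const int x = get_global_id(0);
    const int y = get_global_id(1);
    if (x >= width || y >= height)
        return;

    const int R = 2;                 /* radius of the 5x5 mask */
    float sum = 0.0f;

    for (int dy = -R; dy <= R; ++dy) {
        for (int dx = -R; dx <= R; ++dx) {
            /* clamp the sample position to the image border */
            int sx = clamp(x + dx, 0, width  - 1);
            int sy = clamp(y + dy, 0, height - 1);
            sum += src[sy * width + sx] * mask[(dy + R) * 5 + (dx + R)];
        }
    }

    dst[y * width + x] = sum;
}

The mask would be a 5x5 Gaussian with weights summing to 1, and the kernel would be enqueued with a 2D global size covering width x height.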

0 Likes

I did it in two passes, but now I have to implement it in a 2-dimensional kernel which does a transformation on the image before the Gaussian blur; so even though it's slower, I think it's the best fit for my application.

I tried image convolution with Gaussian coefficients, but it doesn't work.

Does someone have the Gaussian convolution matrices, and how is the computation done?

Thanks.

0 Likes

Hello.

In one of my projects a user-configurable general convolution with kernels of up to 9x9 entries is used.

I use a 5x5 blur kernel and the performance impact isn't that big on a 4850. Compared to a similar 3x3 kernel, the difference is marginal.

Once you set a maximum kernel size, the shader (in my case GLSL) is quite straightforward to implement.

So the short answer is: yes, it is possible.

Here's a very simple GLSL fragment shader (I bet it can be optimized a lot by someone more experienced with GLSL than me):

#version 130

in vec4 Color;
in vec2 TextureCoordinate;

const int c_iMaxKernelSize = 9 * 9;

uniform int u_iKernelSize;
uniform float u_fWeights[c_iMaxKernelSize];

uniform vec2 u_v2Offsets[c_iMaxKernelSize];
uniform sampler2D u_s2ScreenImage;

out vec4 FragColor;


void main()
{
     vec4 ColorSum = vec4(0.0);

     for(int i = 0; i < u_iKernelSize; i++)
     {
          vec4 ColorSample = Color * texture2D(u_s2ScreenImage, TextureCoordinate + u_v2Offsets[ i ]);

          ColorSum += ColorSample * u_fWeights[ i ];
     }

     FragColor = vec4(ColorSum.rgb, 1.0);
}


The shader needs the current kernel's size (the number of entries in the array) in u_iKernelSize, an array with the kernel's weights in u_fWeights[], and an array of texel offsets (each entry's uv distance from the center; the center is always {0.0, 0.0}) in u_v2Offsets[] to apply the convolution to the image bound to the sampler u_s2ScreenImage.

Keep in mind that all of these parameters (uniforms) only need to be set once, as soon as the kernel is available. This might be during program initialization or after loading a kernel configuration.
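As an illustration of how the weights and offsets could be computed, here is an untested host-side C sketch; the helper name, the fixed 5x5 size, sigma and the texture-size parameters are my own assumptions:

#include <math.h>

#define K 5   /* kernel width/height, so K*K entries */

/* Fills the weight and offset arrays the fragment shader above expects.
   Weights form a normalized 2D Gaussian; offsets are uv distances from the
   center texel, which stays at (0.0, 0.0). */
static void build_gaussian_kernel(float sigma,
                                  float texWidth, float texHeight,
                                  float weights[K * K],
                                  float offsets[K * K * 2])
{
    const int r = K / 2;
    float sum = 0.0f;

    for (int dy = -r; dy <= r; ++dy) {
        for (int dx = -r; dx <= r; ++dx) {
            int i = (dy + r) * K + (dx + r);
            /* unnormalized 2D Gaussian weight */
            weights[i] = expf(-(float)(dx * dx + dy * dy) / (2.0f * sigma * sigma));
            sum += weights[i];
            /* texel offset in uv space */
            offsets[2 * i]     = (float)dx / texWidth;
            offsets[2 * i + 1] = (float)dy / texHeight;
        }
    }

    /* normalize so the weights sum to 1 and the image keeps its brightness */
    for (int i = 0; i < K * K; ++i)
        weights[i] /= sum;
}

The arrays would then be uploaded once: K * K for u_iKernelSize via glUniform1i, the weights via glUniform1fv, and the offsets via glUniform2fv.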

The corresponding vertex shader can be very simple and just has to pass a color and texture coordinates in addition to the transformed vertex position.

I hope this helps.

Best regards,

Rob


PS: You can use the code freely (no warranties and no liability, though). I posted it under the WTFPL version 2.

EDIT: As the color isn't really needed here, you might remove it (along with the multiplication by Color in the loop).

0 Likes

Hi,

If you want to perform the 2D convolution in each thread, it would be a good idea to make use of local memory (a rough sketch follows the list below).

To convolve with a 2D Gaussian mask, you have to:

1) flip the mask in both the horizontal and vertical direction (for a symmetric Gaussian mask this changes nothing),

2) move the mask center to the sample you are computing,

3) multiply each mask element with its overlapped sample,

4) sum the products.
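Here is a rough, untested sketch of those steps as an OpenCL kernel that first stages each work-group's neighborhood in local memory; the 5x5 mask, the 16x16 work-group size, the flat grayscale buffer layout and all names are my own assumptions (the mask is assumed pre-flipped, which changes nothing for a symmetric Gaussian):

#define R        2                  /* mask radius, i.e. a 5x5 mask */
#define TILE     16                 /* work-group is TILE x TILE */
#define TILE_PAD (TILE + 2 * R)     /* tile plus an apron of R texels on each side */

__kernel void gaussian_blur_tiled(__global const float *src,
                                  __global float *dst,
                                  __constant float *mask,   /* 5x5, pre-flipped (step 1) */
                                  const int width,
                                  const int height)
{
    __local float tile[TILE_PAD][TILE_PAD];

    const int lx = get_local_id(0);
    const int ly = get_local_id(1);
    const int gx = get_group_id(0) * TILE;   /* tile origin in the image */
    const int gy = get_group_id(1) * TILE;

    /* Stage the padded tile in local memory; each work-item may load
       several texels. Border samples are clamped to the image edge. */
    for (int ty = ly; ty < TILE_PAD; ty += TILE) {
        for (int tx = lx; tx < TILE_PAD; tx += TILE) {
            int sx = clamp(gx + tx - R, 0, width  - 1);
            int sy = clamp(gy + ty - R, 0, height - 1);
            tile[ty][tx] = src[sy * width + sx];
        }
    }
    barrier(CLK_LOCAL_MEM_FENCE);

    const int x = gx + lx;
    const int y = gy + ly;
    if (x >= width || y >= height)
        return;

    /* Steps 2-4: center the mask on this sample, multiply each mask element
       with its overlapped sample, and sum the products. */
    float sum = 0.0f;
    for (int dy = -R; dy <= R; ++dy)
        for (int dx = -R; dx <= R; ++dx)
            sum += tile[ly + R + dy][lx + R + dx] * mask[(dy + R) * 5 + (dx + R)];

    dst[y * width + x] = sum;
}

The host would enqueue this with a 16x16 local size and a global size rounded up to a multiple of 16 in each dimension.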

0 Likes