We are using compute shaders to calculate fractal noise. The problem is that we can't process inputs larger than 4096 vectors. Once we exceed this limit, the shader either returns the same value for all remaining elements or skips them entirely.
As you can see, it breaks as soon as the resolution exceeds 64x64 (4096 values).
When we use no input buffer at all (i.e. generating random test values inside the shader and writing them straight into the output buffer), the problem disappears completely, so I'm fairly sure it's a bug related to the input buffer.
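For context, the shader side is assumed to consume the buffer roughly like the sketch below (this mirrors the binding point used in the C# code that follows; the local work-group size, the output binding index, and the bounds guard are assumptions, not our exact shader):

```glsl
#version 430

// Assumption: 64 invocations per work group; the real shader may differ
layout(local_size_x = 64) in;

// Matches GL.BindBufferBase(..., 0, inBuffer) in the host code below
layout(std430, binding = 0) buffer InputBuffer {
    vec4 inputs[];
};

// Assumption: output bound at index 2; one float of noise per input vec4
layout(std430, binding = 2) buffer OutputBuffer {
    float results[];
};

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= inputs.length()) return; // guard against overshooting the buffer
    results[i] = inputs[i].x;         // placeholder for the real noise function
}
```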
Here's how we run it from input to the finished result:
public float[] GetValues(Vector4[] input) { // Takes a 1D array of Vector4 as input; the finished noise comes back as a 1D float array
GL.UseProgram(program);
// Generate Input Buffers
int inBuffer = GL.GenBuffer(); // The first buffer holds the vec4 data and is our "problem child"
GL.BindBuffer(BufferTarget.ArrayBuffer, inBuffer); // Makes no difference whether we use ArrayBuffer or ShaderStorageBuffer here
GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vector4.SizeInBytes * input.Length), input, BufferUsageHint.StaticDraw);
GL.BindBufferBase(BufferTarget.ShaderStorageBuffer, 0, inBuffer); // Bind the buffer to shader binding point 0
int inPermBuffer = GL.GenBuffer(); // The second input holds the permutation data; it's only about a kilobyte and causes no problems