Dear AMD,
it's been more than half a year since the OpenGL 4.3 specs came out (August 6th, 2012). At the time the greens immediately had a beta driver capable of running apps. You didn't.
Now the greens have a stable driver. You still don't.
Half a year has passed and there is still no driver supporting it. You could at least release a beta driver for developers, but no... It seems as if we're not as important to you anymore. When I had to choose which platform to start developing on, the main factor was developer support.
I know there is OpenCL; it is great and I love it, but when it comes to OpenGL interoperability (among many other things), the standard is still inadequate.
It would be a shame if developers had to switch to green hardware to make sure they can develop OpenGL 4.3 applications.
So I'd like to ask you: please hurry up releasing at least a beta driver!
Nice job with the 13.4 driver!!!
Keep it up!
AMD Catalyst 13.4 WHQL drivers have been released. I know they have been in beta for several months, and they support a number of the OpenGL 4.3 features.
I don't know which features you require, but a quick copy and paste from the notes revealed the following:
For reference, the full release notes are here: http://support.amd.com/us/kbarticles/Pages/AMDCatalyst13-4WINReleaseNotes.aspx
Is there a specification for GL_AMD_interleaved_elements somewhere?
I needed the compute shader functionality the most. I've seen the release notes; that is why I wrote "nice job". They finally delivered.
Hi,
I have been testing the new OpenGL compute shader and shader storage buffer object extensions and found the following bugs (Catalyst 13.4 on a 7950):
(Please note that all the samples I use for testing work correctly on Nvidia OpenGL 4.3 cards.)
* Using atomicMax and atomicMin on shared variables hangs the GLSL compiler; others like atomicOr are OK:
shared uint ldsZMax; // ("groupshared" is the HLSL spelling; in GLSL the qualifier is "shared")
uint z;
atomicMax(ldsZMax, z);
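To make the repro self-contained, the fragment above can be wrapped in a complete minimal compute shader along these lines (the 16x16 local size, the `Result`/`maxZ` output buffer, and the use of gl_GlobalInvocationID as a stand-in value are arbitrary scaffolding, not part of the original case):

```glsl
#version 430
layout (local_size_x = 16, local_size_y = 16) in;

// Arbitrary output buffer so the compiler cannot eliminate the atomic.
layout (std430, binding = 0) buffer Result { uint maxZ; };

shared uint ldsZMax;

void main() {
    if (gl_LocalInvocationIndex == 0u)
        ldsZMax = 0u;
    barrier();

    uint z = gl_GlobalInvocationID.x; // stand-in for a real per-invocation value
    atomicMax(ldsZMax, z);            // this is the call that hangs the GLSL compiler
    barrier();

    if (gl_LocalInvocationIndex == 0u)
        atomicMax(maxZ, ldsZMax);
}
```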
* Using a compute shader with the following launch size and shared array usage:
#define BLOCK_SIZE 32
layout (local_size_x = BLOCK_SIZE, local_size_y = BLOCK_SIZE) in;
shared double As[BLOCK_SIZE*BLOCK_SIZE];
shared double Bs[BLOCK_SIZE*BLOCK_SIZE];
crashes with:
Compute shader(s) failed to link.
Compute link error: HW_UNSUPPORTED.
Compute shader not supported by hardware
Reducing BLOCK_SIZE below 32 seems to work. I have tested that
layout (local_size_x = 32, local_size_y = 32) in;
by itself isn't an issue, so 32 should work: with this configuration each of the two shared arrays is 8192 bytes (sizeof(double)*32*32), so total shared memory usage is 2*8192 = 16384 bytes, well within the reported maximum (GL_MAX_COMPUTE_SHARED_MEMORY_SIZE: 32768). I verified the issue is in shared memory size accounting, since (with BLOCK_SIZE=32):
shared double As[BLOCK_SIZE*BLOCK_SIZE-1];
shared double Bs[BLOCK_SIZE*BLOCK_SIZE];
seems to compile, while the full-size version does not. So the effective limit appears to sit a few bytes below 16384, far under the advertised 32768; please fix it so the full reported amount of shared memory is usable.
* Using SSBOs in non-compute shaders (such as fragment shaders) does not seem to work correctly.
* Querying GL_MAX_COMPUTE_WORK_GROUP_COUNT and GL_MAX_COMPUTE_WORK_GROUP_SIZE, I get the following debug_output error:
glGetIntegerv parameter <pname> has an invalid enum '0x91be' (GL_INVALID_ENUM)
(Note that per the GL 4.3 spec these two are per-dimension indexed queries, so they may be intended for glGetIntegeri_v rather than glGetIntegerv.) Other new enums like GL_MAX_COMPUTE_ATOMIC_COUNTERS seem to work.
Related: although the no_attachments extension (ARB_framebuffer_no_attachments) is not advertised, the new entry points are present, so I played with the default framebuffer parameters:
glGenFramebuffers(1, &noat);
glBindFramebuffer(GL_FRAMEBUFFER, noat);
// Give the attachment-less framebuffer a default size
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, w);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, h);
A simple test using this works on the 79xx series but not on the 58xx series.