
coderalex
Journeyman III

AMD Fusion OpenGL advanced shaders

Hello,

I'm working on some OpenGL code, and I had intended to use tessellation and geometry shaders in my program.

However, when I first ran performance tests on the laptop I'm using now, I got some disturbing/strange results...

With basic vertex and fragment shaders the code runs incredibly smoothly, outperforming my old desktop several times over (probably thanks to much higher data bandwidth). But when I add either a geometry shader or tessellation shaders, even when they're just pass-through shaders that shouldn't have any significant effect, performance takes a MASSIVE hit. And I really mean massive.
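
To be concrete, by "pass-through" I mean a geometry shader that just re-emits its input, roughly like this (a minimal sketch of what I'm testing, written as a Java string constant the way I feed shader source to LWJGL; the pass-through tessellation shaders are analogous):

    // Minimal pass-through geometry shader (GLSL 4.20). It re-emits each
    // input triangle unchanged: no amplification, no culling, no extra work.
    private static final String PASSTHROUGH_GS =
        "#version 420 core\n" +
        "layout (triangles) in;\n" +
        "layout (triangle_strip, max_vertices = 3) out;\n" +
        "void main() {\n" +
        "    for (int i = 0; i < 3; i++) {\n" +
        "        gl_Position = gl_in[i].gl_Position;\n" +
        "        EmitVertex();\n" +
        "    }\n" +
        "    EndPrimitive();\n" +
        "}\n";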

I tested with two different programs and got these results:

Program 1: frame time increased from about 4 ms to around 55 ms

Program 2: frame time increased from about 9 ms to around 250-300 ms
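
(These numbers come from a simple timer around the render loop, roughly like the sketch below; renderFrame() is just a stand-in name for my actual drawing code:)

    // Rough frame-time measurement (illustrative, not my exact code).
    // glFinish() forces the GPU to drain its queue so the CPU timer
    // reflects the whole frame, not just command submission.
    long start = System.nanoTime();
    renderFrame();                               // stand-in for the draw calls
    org.lwjgl.opengl.GL11.glFinish();
    double frameMs = (System.nanoTime() - start) / 1.0e6;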

Normally there's a small cost to adding extra shader stages, but this is just absurd.

Here are some details:

My GPU (well, really an APU) is an AMD Radeon HD 7660G; it's pretty new and part of the A-Series (the laptop says A10), I believe.

It has support for OpenGL 4.2 (which is more than enough for tessellation and geometry shaders).

In my programs the reported context version is 4.2.11476 Compatibility Profile Context (queried as shown in the snippet after this list).

The programs still do what they're supposed to, but with horrible performance (a software renderer might be faster o_O).

I've tried newer drivers, but that didn't make any difference.
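
For reference, the renderer/version details above come from a query like this in LWJGL (static imports from GL11):

    import static org.lwjgl.opengl.GL11.*;

    // Print the renderer and context version strings reported by the driver;
    // this is where "4.2.11476 Compatibility Profile Context" comes from.
    System.out.println("GL_RENDERER: " + glGetString(GL_RENDERER));
    System.out.println("GL_VERSION:  " + glGetString(GL_VERSION));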

Is it possible that the hardware doesn't have native support for geometry or tessellation shaders, and the driver is just emulating them?

Could they really do that just to claim OpenGL 4.2 support? Geometry shaders are pretty old, and this is a very new graphics chip (although it's in a laptop), but still... I really have no idea (I hope it's just a bug or something).

Could anyone please shed some light on this mystery?

Has anyone here tried running geometry or tessellation shaders on this or a similar APU?

Many thanks in advance.

gsellers
Staff

Hi,

All of our current graphics processors, including APUs, support OpenGL 4.2 fully in hardware. We don't even have a software implementation of tessellation, so that cannot be the issue. You should expect some performance hit from enabling geometry shaders or tessellation, but your figures do seem pretty high. We'll do some performance testing on an APU to see if we can replicate your results. If we can't, we may need to get a test case from you.

Cheers,

Graham

Well I'm still stumped...

I've tried different variations of the code, rendering with different methods and so on, but nothing has had any significant effect yet, and there aren't many similarities between the programs I've tried this with. I could post the code for one of my programs, although it's in LWJGL, which could be inconvenient. Otherwise, if it means anything, I've tried this on two different driver versions, 8.942.7.0 and 8.982.0.0, with the same results. It could just be something in my code too; before this, everything I tested was on NVIDIA GPUs. Of course I'm not using anything vendor-specific, but the hardware can definitely make a difference in terms of which code runs well and which doesn't.

Bit of an update...

I adapted another simple program to test performance, this time starting from completely different code (an example LWJGL shader program) in case my coding style was causing problems... and ended up with the same results.

Adding a geometry shader (or tessellation) appears to hurt performance mainly through a drop in data throughput: the performance impact is proportional to the amount of data being transferred for rendering. Throughput usually drops by around 7x, and sometimes there's an additional loss from certain optimizations becoming ineffective. For instance, in programs that transfer very large amounts of data I can usually improve speed, when using indexed VBOs, by having the same vertices appear multiple times in close succession (presumably vertex-cache reuse), and by using a primitive restart index. After adding a geometry shader those optimizations no longer seem to help, which accounts for the range of performance impacts I've seen across programs (from about a 7x reduction up to nearly 30x)...

Idk, perhaps some of that info could be helpful; just disregard it if it isn't.
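
For reference, the primitive-restart optimization I mean looks roughly like this in LWJGL (a sketch with illustrative names, not my exact code):

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL31.*;

    // Pick an index value that never occurs as a real vertex index.
    final int RESTART_INDEX = 0xFFFFFFFF;

    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(RESTART_INDEX);

    // With the restart index inserted between strips in the index buffer,
    // many strips can be drawn with one glDrawElements call instead of
    // one call per strip. (indexCount is an assumed variable here.)
    glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_INT, 0);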

marshalll
Journeyman III

Thanks for the information.

AMD Fusion is the marketing name for a series of APUs from AMD, aimed at providing good performance with low power consumption by integrating a CPU with a GPU based on a mobile stand-alone GPU. There has since been a disagreement between Arctic (formerly Arctic Cooling) and AMD over the use of the "Fusion" brand name, and AMD has changed Fusion to Heterogeneous System Architecture (HSA). Fusion was announced in 2006 and has been in development since then. The final design is a product of the merger between AMD and ATI, combining general-purpose processor execution with the 3D geometry processing and other functions of modern GPUs on a single die. The technology was shown to the general public in January 2011 at CES.

coderalex
Journeyman III

Final update (I hope).

Well, I still have no idea what specifically is causing this, but what I do know now is that the problem is somehow related to Eclipse, which I'm coding and testing in (so it's probably not the hardware). I was able to get executable jars to run properly, without the performance hit. So many possible variables, and I just didn't consider that one (when I should have)... Sorry for bothering everyone, and thanks for the effort to help.

The problem is still there; the fix was a fluke. But I won't bother you guys anymore...
