All of our current graphics processors, including APUs, fully support OpenGL 4.2 in hardware. We don't even have a software implementation of tessellation, so that cannot be the issue. You should expect some performance hit from enabling geometry shaders or tessellation, but your figures do seem pretty high. We'll do some performance testing on an APU to see if we can replicate your results. If we can't, we may need to get a test case from you.
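Since hardware support hinges on the version the driver actually reports, here is a minimal sketch of how one might sanity-check that. The parsing helper is hypothetical; in an LWJGL program the version string itself would come from glGetString(GL_VERSION) once a context is current.

```java
// Hypothetical helper: parses the "major.minor" prefix of a GL_VERSION
// string (e.g. "4.2.11931 Compatibility Profile Context") and compares it
// against a required minimum version.
public class GlVersionCheck {
    static boolean atLeast(String versionString, int reqMajor, int reqMinor) {
        // Split on dots and spaces; the first two tokens are major and minor.
        String[] parts = versionString.trim().split("[. ]");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        return major > reqMajor || (major == reqMajor && minor >= reqMinor);
    }

    public static void main(String[] args) {
        System.out.println(atLeast("4.2.11931 Compatibility Profile Context", 4, 2)); // true
        System.out.println(atLeast("3.3.0", 4, 2)); // false
    }
}
```

If this reports less than 4.2, the driver is not exposing the full feature set and any tessellation path would be suspect before performance even enters the picture.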
Well, I'm still stumped...
I've tried different variations of the code, rendering with different methods and so on, but nothing has had any significant effect yet, and there aren't many similarities between the programs I've tried this with. I could post the code for one of my programs, although it's in LWJGL, which could be inconvenient. Otherwise, if it means anything, I've tried this on two different drivers, 8.942.7.0 and 8.982.0.0, with the same results. It could just be something in my code too; before this, everything I tested was on NVIDIA GPUs. Of course I'm not using anything vendor-specific, but the hardware can definitely make a difference in terms of what code will work well and what won't.
Bit of an update...
I went through the process of adapting another simple program to test performance, this time using completely different code based on an example LWJGL shader program, in case my coding style was causing problems... and ended up with the same results.

Adding a geometry shader (or tessellation) appears to affect performance mainly through a decrease in data bandwidth: the performance hit is proportional to the amount of data being transferred for rendering. The data speeds usually drop by around 7 times, and then there's sometimes an additional loss from what appears to be certain optimizations becoming less effective. For instance, in programs with very large amounts of data transfer I can usually increase speeds when using indexed VBOs by having the same vertices appear multiple times in close succession, and also by using a primitive restart index. After adding a geometry shader, though, these optimizations don't seem to work well anymore, and that accounts for the differences in the performance impact I've seen from adding a geometry shader to different programs (ranging from a 7x reduction to nearly 30x). Perhaps some of that info could be helpful; just disregard it if it isn't.
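For reference, the primitive restart trick mentioned above lets several strips share one index buffer and one draw call. Below is a minimal CPU-side sketch (class and method names are my own, not from the poster's code) of building such a buffer; on the GL side it would be paired with glEnable(GL_PRIMITIVE_RESTART) and glPrimitiveRestartIndex(RESTART) in LWJGL.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: concatenates several triangle strips into a single
// index buffer, separating them with a primitive restart marker so they can
// all be drawn with one glDrawElements call.
public class StripIndexBuffer {
    // Commonly the maximum value of the index type. Java ints are signed,
    // so this prints as -1, but the bit pattern GL sees is 0xFFFFFFFF.
    static final int RESTART = 0xFFFFFFFF;

    static int[] concatenate(List<int[]> strips) {
        int total = 0;
        for (int[] s : strips) total += s.length;
        int[] out = new int[total + Math.max(0, strips.size() - 1)];
        int pos = 0;
        for (int i = 0; i < strips.size(); i++) {
            if (i > 0) out[pos++] = RESTART; // restart marker between strips
            for (int idx : strips.get(i)) out[pos++] = idx;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] ib = concatenate(List.of(new int[]{0, 1, 2, 3}, new int[]{4, 5, 6}));
        System.out.println(Arrays.toString(ib)); // [0, 1, 2, 3, -1, 4, 5, 6]
    }
}
```

The indices within each strip reuse neighboring vertices, which is exactly the close-succession reuse that the post says benefits from the post-transform cache on the fixed path but seems to stop paying off once a geometry shader is active.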
Thanks for the information.
AMD Fusion is the marketing name for a series of APUs by AMD, aimed at providing good performance with low power consumption by integrating a CPU with a GPU based on a mobile stand-alone GPU. There was subsequently a disagreement between Arctic (formerly Arctic Cooling) and AMD over the use of the "Fusion" brand name, and AMD has since changed Fusion to Heterogeneous System Architecture (HSA). Fusion was announced in 2006 and has been in development since then. The final design is the product of the merger between AMD and ATI, combining general-purpose processor execution with the 3D geometry processing and other functions of modern GPUs on a single die. The technology was shown to the general public in January 2011 at CES.
Final update (I hope).
Well, I still have no idea what specifically is causing this, but what I do know now is that the problem is somehow related to the Eclipse IDE I'm programming and testing in (so probably not the hardware). I was able to get executable JARs to run properly without the performance hit. So many possible variables, and I just didn't consider that one (when I should have)... Sorry for bothering anyone, and thanks for the effort to help.
The problem's still there; the fix was a fluke. But I won't bother you guys anymore...