
cjb80
Journeyman III

Embedded Application & Graphics Overhead in GPU Architecture

I am working on some research into the usage of GPUs for computing in a military application. I am currently writing up a paper about my work and I am curious about a few items:

1) Is there any information about the transistor overhead associated with the rendering/texture hardware on a GPU? In other words, some amount of hardware on the chip is dedicated to rendering and goes unused when the GPU is used for GPGPU work. Is this significant? Could the chip's efficiency be further improved for GPGPU workloads if it did not have the graphics-related hardware?

2) The embedded GPU that was recently announced has something like 1/4 the computing capability of the newer GPUs. Are there technical challenges limiting these processors (e.g., cooling)? Is there anything on the horizon that would bring these chips to parity with more recent GPUs?

Thanks

Chris

Meteorhead
Challenger

Here are some naive and unofficial answers to your questions.

1) There aren't many components on a GPU that remain unused when doing GPGPU. The vast majority of the die is occupied by shaders and cache. There are some fixed-function units that go unused (the tessellator, the Universal Video Decoder), but these hold about 1/90 of the die (roughly). Even with military usage in mind, it's undeniable that GPUs are so efficient (and cost-effective) because they are developed for the broad commercial market, so no extra engineering work is required for professional usage. Certainly, removing these fixed-function units could increase performance by roughly 5-8%, but it would triple the chip's cost: fewer people would buy such a chip, the design cost would still apply, and producing so few chips increases fixed expenses relative to income. So in my opinion it is futile to go in this direction.
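To put that cost argument in numbers, here is a rough back-of-envelope sketch in Python. Every figure in it (design cost, per-chip manufacturing cost, sales volumes) is a made-up illustrative assumption, not an AMD number; the point is only to show why amortizing a fixed design cost over a niche volume dominates any 5-8% performance gain.

```python
# Back-of-envelope sketch of the "strip the fixed-function units" trade-off.
# Every number below is an illustrative assumption, not a real AMD figure.

die_fraction_fixed = 1 / 90   # rough die share of tessellator, UVD, etc.
perf_gain = 0.065             # ~5-8% gain if that area became shaders instead

design_cost = 300e6           # hypothetical one-time design (NRE) cost, dollars
silicon_cost = 150            # hypothetical per-chip manufacturing cost, dollars
commercial_volume = 5e6       # units a mass-market part might sell
niche_volume = 0.5e6          # units a GPGPU-only variant might sell

def unit_cost(volume):
    """Per-chip cost = manufacturing + an amortized share of the fixed design cost."""
    return silicon_cost + design_cost / volume

ratio = unit_cost(niche_volume) / unit_cost(commercial_volume)
print(f"perf gain from reclaimed area: ~{perf_gain:.1%}")
print(f"commercial unit cost: ${unit_cost(commercial_volume):,.0f}")
print(f"niche unit cost:      ${unit_cost(niche_volume):,.0f}")
print(f"cost multiplier:      {ratio:.1f}x")
```

With these assumed numbers the niche part comes out around 3.6x the unit cost, which is the shape of the "triple its cost" argument above.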

2) I would very much welcome something similar to what you suggest: server processors with, e.g., two x86 cores, with the rest of the die used for a capable GPU. Pulling out performance close to high-end GPUs would not be possible, however. GPUs have far more active processing elements than CPUs, and as such they produce a lot more heat. If you compare the cooling system of a high-end GPU with that of even a server-class CPU, they aren't in the same league.

This might change with the next generation of fabrication, where heat produced relative to compute power will decrease. What sounds really amazing is that (if I'm not mistaken) AMD wishes to release the next-gen Wimbledon GPU in the notebook market. They have stated before that although the MXM standard allows 100W notebook graphics, they don't see the logic in putting such a hot chip into a notebook. So most likely the next-gen high-end mobile graphics processor will work within 65-75W and notebook cooling. It will probably be seriously underclocked compared to the desktop variant, but it will hopefully perform at about 1.8-2 TFLOPS in single precision, and it will do so with ~70W and cooling well suited to blade servers.
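For scale, a one-line sketch of the efficiency those figures would imply. Both inputs are the speculative values from the paragraph above, not announced specs:

```python
# Implied efficiency of such a hypothetical ~70W mobile part.
# The performance and power figures are the speculative ones quoted above.

sp_tflops = 1.9      # midpoint of the hoped-for 1.8-2 TFLOPS single precision
power_watts = 70     # target board power for notebook/blade cooling

gflops_per_watt = sp_tflops * 1000 / power_watts
print(f"~{gflops_per_watt:.0f} GFLOPS/W single precision")   # ~27 GFLOPS/W
```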


cjb80
Journeyman III

Thanks for your reply, that was very helpful.

I don't think my employer would have a problem implementing some interesting cooling systems if we could get an embedded chip with the computing capacity of a 69xx chip. The laptop line does sound promising...

Thanks again
