Due to the secretive nature of commercial GPUs, I haven't found anything conclusive yet.
The best thing I can do is ask someone with more experience than me (i.e., everyone). It's quite a broad question; I hope it fits the rules.
Also, GPUs aren't directly related to this site, but none of the many other sites in the network seemed a better fit, so I'm trying here.
Because the question involves parallel processing, and because older GPUs (from retro consoles and elsewhere) use architectures very different from modern ones, this question is about recent GPUs (the ones supporting GPGPU). To simplify, think of NVIDIA and AMD ones.
My mental picture of a GPU is admittedly fuzzy: I know it's a specialized piece of hardware whose primary components are a large number of cores working in parallel, used to process images or other signals, plus a fairly large amount of RAM used to buffer data and speed up processing.
Too "Wikipedia-level", I know.
Here are some of my hypotheses and the information I've gathered so far (the help section says to share your research); feel free to refute them:
Point 1: What happens when data sent from the CPU enters the GPU?
- You can't write GPU programs directly in the GPU's own assembly: a program written in, for example, CUDA is compiled to PTX, an intermediate language that is then translated (compiled? interpreted?) internally for the GPU. If there is an internal native language, a sort of assembly (maybe a microcode-like architecture?), it is kept secret.
- Official open-source drivers are largely missing. Maybe you could find something by reverse engineering the drivers? Good luck. But there is good news: some GPUs have fairly good public documentation.
Point 2: Pipeline and shaders
What is a shader? Difficult to answer, but people have tried. As far as I can tell, a shader is a piece of software executed by the GPU to render all those beautiful 3D scenes. There are several types of shaders, and each one runs at a particular stage of the GPU pipeline.
But what is that pipeline? It seems to be a sequence of steps that goes from receiving input from the CPU to rendering the output. DirectX and OpenGL should provide an abstraction over that implementation, but does each model/family have a different implementation? Is that a trade secret, or is it freely shared with us, the poor community? (Or maybe under NDA?)
Point 3: Metaphor
I want to avoid something like "This question is too long. We hate you. Question closed."
You'd probably be right, but I'll still try to save my question.
The fastest way to answer might be by example: is there another piece of hardware the GPU can be compared to? Maybe one that is conceptually simpler.
In my opinion, it can partially be viewed like an FPGA, because of the parallel execution. Many cores = many IP blocks = FPGA? Maybe. But obviously an FPGA is completely different hardware, much more flexible and suited to different scenarios.
Can you give other examples? Any help is really appreciated, from the bottom of my heart.
If you want, you can also post any resources, books, or readings you think are related to the topic. I really like to go in depth.