I ask this because I eventually want to move away from needing hardware vendors to constantly write drivers for every operating system. Perhaps platforms like Windows and the Mac will always require it, but maybe we can at least start with Linux.
My starting point is simple: hardware is hardware. Different devices serve different tasks and have different capabilities, but that doesn't mean they don't have a lot in common.
Take the latest and greatest AMD processors and graphics cards. You have to write a driver for Linux, which is only about 2% of the worldwide market, for each and every processor and video card type, until the Linux community creates its own. Writing these drivers isn't even cost-effective for AMD; you make little to no money from that market, and I believe it is a waste of your time.
I thought we could start with the hardest problem: CPUs, APUs, GPUs, and discrete graphics cards.
It is a challenging problem because I also want to get the best out of every feature of each processor and graphics type.
What I would want is one of the following:
1. Write a base set of common code plus sets of unique code, so that supporting any current processor or video card requires only a configuration file - no new code necessary.
2. Invert the design: instead of the kernel requiring a driver written by the hardware vendor, the CPU/GPU forms the base, exposed as an API, and it is up to the kernel writers to make the best use of it. I believe this is the best design option, because the hardware manufacturers are then responsible for testing only their end of the hardware, and if the API changes, it is up to the kernel writers to change and test their own code.
3. Auto-detect any hardware, along with its capabilities and best usage (this is probably impossible right now).
I assume option 1 would require a ton of code, and in most cases there probably aren't enough commonalities across every type of processor to make it work.
I assume option 3 is also very difficult: unless the motherboard can expose everything on it, everything each part is supposed to do, and how the OS should best use it, it won't work.
But I believe option 2, the inverted driver design, is interesting for a few reasons:
(a) The driver becomes the base system that the OS is written on top of (today the kernel sits below the driver; here the driver would sit below the kernel). This means two new things:
(a1) The hardware vendor now only needs to write what is basically a 'kernel piece,' as an OS normally would, but all they care about is arriving at an API the kernel can build on. Hardware vendors would no longer need to test their drivers on every OS - they would just need to test their hardware, which you probably already do to verify that it works correctly.
(a2) It then becomes the responsibility of the OS to make the fullest and most efficient use of each piece of hardware, and it now has an API to work from. It is fine for OS writers to still plan ahead for numerous types of hardware, but it is their responsibility to write their system to make proper use of the hardware code.
(b) Point (a) means each hardware vendor is responsible for writing code that ensures their hardware works without error or flaw, and then exposes an API to the kernel writers. If the API is largely standard, and new features either reuse the same API, add a new one, or both, then it is the OS vendor's responsibility to make the best use of the hardware. Each party in the stack is responsible only for its own products.
(c) This should mean quicker time to market: instead of waiting for drivers to come out, new hardware would almost immediately be supported by any OS that stays up to date, and in many cases the API wouldn't change, so new hardware would be supported instantly.
(d) Hardware vendors would only need to write their end of the code once, producing one API for all OSes, and the OS would only need to update its end when the API changes (for instance, when APUs add new features the kernel could use more efficiently).
Now, I could test a lot of old and new processors and graphics boards to see if this works, but if you want to do this with me, we could at least arrive at a proof of concept, and maybe push a new industry standard.
Let me know what your thoughts are. I'm not a hardware vendor, and the one person I know who knows computer hardware has no knowledge of software. If I am going to do something like this, I would need to see all of your AM2/3/+, FX/?/+, etc. driver code for either Windows or Linux, and go from there.
So I would need one of two things from AMD in order to do this:
1. Sources for all driver code to date for all modern processor, graphics, and mixed chipsets, along with whatever restrictions you need so that I do not redistribute that code.
2. Collaboration with you, since you already have the hardware, to see if we can rewrite Linux to work this way; you could then push for a new standard without me. What I would do after that is completely rewrite Linux into a new system to make it more marketable and competitive rather than a bystander; at that point, however, it would no longer be Unix or Linux.
Let me know, I'll be waiting...