Archives Discussions

happyturtle
Journeyman III

Driverless kernel design...

I ask this because I eventually want to move away from hardware vendors having to constantly write drivers for every operating system.  Perhaps platforms like Windows and the Mac will always require it, but maybe we can at least start with Linux.

My starting point is that hardware is hardware.  Sure, devices are built for different tasks and have different capabilities, but that doesn't mean they don't have a lot in common.

Take the latest and greatest from AMD in processors and graphics.  You have to write a Linux driver, for a platform that is only about 2% of the worldwide market, for each and every processor and video card type, until the Linux community creates their own.  It's not even cost-effective for AMD to write these drivers; the company makes little to no money off that market, and I believe it is a waste of AMD's time.

I thought we could start with the hardest problem: CPUs, APUs, GPUs, and discrete graphics cards.

This is also a challenging problem because I would want to properly bring out the best of every feature of each processor or graphics type.

What I would want is one of the following:

1.  Write individual base sets of common code plus sets of unique code, so that for any current processor or video card we only have to write a configuration file - no code necessary.

2.  Invert the design: instead of the kernel requiring a driver written by the hardware vendor, the CPU/GPU forms the base, exposed as an API, and it is then up to the kernel writers to make the best use of it.  I believe this is the best design option, because the hardware manufacturers become responsible for testing only their end of the hardware; if the API the kernel consumes changes, it is up to the kernel writers to change their code and test their end.

3.  Be able to auto-detect any hardware, along with capabilities and best usage (this is probably impossible right now)

I would assume that option 1 would require a ton of code, and in most cases there probably aren't commonalities across every type of processor.

With option 3, I would assume that unless the motherboard can expose everything on it, along with what each part is supposed to do and how the OS should best use it, this is also a very difficult task.

But I believe #2, the inverted driver design, is interesting for a couple of reasons:

(a)  The driver becomes the base system that the OS writes on top of (today the kernel sits below the driver; in this design the driver sits below the kernel).  This changes two things:

(a1)  The hardware vendor now only needs to write what is essentially a 'kernel piece', as an OS normally would, but all they care about is arriving at an API that the kernel can then build on.  Hardware vendors no longer need to test their drivers on every OS - they just need to test their hardware.  You all probably already do this to verify that your hardware works correctly...

(a2)  It then becomes the responsibility of the OS to make full and efficient use of each piece of hardware, and now it has an API to work from.  It is fine for OS writers to still plan ahead for numerous types of hardware, but it is their job to write their system to make proper use of the hardware code.

(b)  Point (a) means each hardware vendor is responsible for writing code that ensures their hardware works without error or flaw, and then exposing an API to the kernel writers.  If the API is largely standard, and new features expose the same API, a new API, or both, then it becomes the OS vendor's responsibility to make the best use of the hardware.  Each party in the stack is responsible only for their own products...

(c)  This should mean faster time to market: instead of waiting for drivers to come out, each new piece of hardware would almost immediately be supported by any OS that stays up to date, and in many cases the API wouldn't change, so new hardware would be supported right away.

(d)  Hardware vendors now only need to write their end of the code once, and it then presents one API to all OSes; the OS only needs to update its end when the API changes (for instance, if APUs add new features that the kernel could use more efficiently).

Now, I could test out a lot of old and new processors and graphics boards to see if this works, but if you all want to do this with me we can at least arrive at a proof-of-concept, and maybe push a new industry standard...

Let me know what your thoughts are.  I'm not a hardware vendor, and the guy I know who knows computer hardware has no knowledge of software.  If I am going to do something like this, I would need to see your AM2/3/+, FX/?/+, etc. driver code for either Windows or Linux, and go from there.

So I would need one of two things from AMD in order to do this:

1.  Sources for all driver code to date for all modern processor, graphics, or mixed chipsets, along with whatever restrictions you need on my not redistributing that code.

2.  Collaboration with you, since you already have the hardware, to see if we can re-write Linux to do this; then you can push for a new standard without me.  What I would then do is completely re-write Linux into a new system to make it more marketable and able to compete instead of being a bystander - though at that point it would no longer be a Unix or Linux...

Let me know, I'll be waiting...

0 Likes
2 Replies
happyturtle
Journeyman III

I need to clarify the implications of doing these things so everyone understands...

Option 1, config file for all hardware:

The idea here is for hardware device drivers to rely on common sets of properties that the OS exposes, so that any hardware the OS 'identifies' - including new hardware - just needs a configuration file for the operating system to use it.

The problem is that this still results in OS-specific drivers, tested per-OS by hardware vendors, and it potentially cuts off a large part of the full capabilities and design of the hardware.

Option 3, auto-detect hardware:

This is similar to option 1, only instead of config files the various OS vendors agree on a base set of APIs, just as hardware vendors of CPUs and GPUs do today.  The upside is that since device drivers become a standard shared between operating systems, any OS knows the API it needs to expose to any hardware.  The arrangement becomes OS-neutral: the hardware vendor only needs to test on one OS and write only one driver, and each OS is responsible for ensuring that the API it exposes has no bugs.

This is a better design than #1 and is preferable for hardware vendors.  A major drawback, again, is that OS designers may not want to keep pace with hardware innovation, and we still have a problem exposing the full capabilities of the hardware; but at least we are getting somewhere...

Option 2, a different design of drivers.

This is particularly interesting to me, because we already have hardware manufacturers - especially all the CPU and GPU manufacturers - adhering to APIs like OpenCL and OpenGL, and the more this matures the better it will become.  The benefit of hardware manufacturers agreeing on an API is that they can drive hardware innovation, and any OS can use that to innovate further through software.  But right now we are still implementing this under the restrictions of OS-dependent APIs that the OS dictates to hardware.  It also forces people to program in those specialized languages, which is not happening to any major extent.  What we should want is to push for new standards in C/C++ so that people can then program as usual...

Here are the implications and a visualization for option 2:

Today:

hardware
  <-> operating system (C/C++ base + libraries and implementation)
  <-> drivers (hardware impl, C/C++ base, OpenCL/GL here), kernel-mode programs (OS base)
  <-> user-mode programs (OS full + libraries for the OS)

As you can see, the driver stack here is backwards.  What this does is the following:

New:

base hardware
  <-> base kernel (C/C++ base implementation by a standards body like Boost)
  <-> drivers for MB/GPU/CPU/APU/etc. (hardware impl, OpenCL/GL here)
  <-> OS (libraries, security, hardware-interaction implementation, windowing system, other higher-level work)
  <-> kernel-mode programs, other device drivers, user-mode programs

So with this design, the standards-setting body creates a base kernel for the C/C++ API.  Then the major hardware vendors (Intel, AMD, Nvidia) use that API to write their own kernel-mode program that operates the hardware properly as designed - bringing out its full capabilities, which we are doing anyway with drivers, only now just one driver needs to be written, and it is OS-independent.

What is nice about that scenario is that AMD has shown it can already abstract OpenCL/GL over a base C++ standard library.  In effect, this layer of hardware drivers sits on top of the base C/C++ APIs (or however it needs to work).

The other nice thing is that all hardware innovations become accessible to any operating system (Linux, Mac, Windows), which can then choose what to implement (though in many cases, according to Khronos, the OS cannot control which hardware innovations are included).  This brings out the full intent of all the major hardware vendors - something we are badly failing at right now - and on top of that, we are already doing this; we just need OS vendors to get on board (and no OS vendor or programming language is doing so...).

It also shows where the point of failure is in option 3, which is similar to this one: OS vendors will never agree on any API, instead claiming 'IP rights', whereas hardware vendors, including Intel, are already on board according to Khronos.  Hardware vendors now just need to take over the base layers themselves instead of writing OS-specific drivers.  Doing this might open up the OS market...

Then any OS can come along and build on top of the APIs exposed by the C/C++ standards working groups; the base hardware (main chipsets) exposes an extension to that, which in turn gives the OS a base system on which to put the pieces together and implement its design.

Now, this only covers the main board, any chipsets, graphics cards, and CPUs or combinations thereof - the hardware vendors are free to implement their hardware or collaborate in any way they find desirable.

For other hardware, Windows has shown it is simple enough for either the vendor to create drivers (though I've seen vendors complain that there is no OS-independent way of doing so) or for the OS to build a driver itself (Linux has built its own drivers for a very long time now, and it is much easier for devices not on the main board).

However, if main-board manufacturers can standardize an API for the various types of PCI, USB, SCSI, audio, and other peripherals, they can probably expose yet another base API for the OS to implement, further making hardware designers' lives easier.  In effect, once the Khronos effort gets off the ground in the appropriate way, we can get together and standardize the rest - since everything plugs into the main board, the main board should already know how to use its connections...

This option 2 is the best design, and I believe it is what should happen with the Khronos movement, simply because Apple dictates the hardware it wants to use and will implement nothing else, and Microsoft corners the market with its APIs, locking software vendors out completely and forcing hardware vendors to cater to specific OSes...

Then option 2 is clearly the winner, and we are already heading down that road..  Thoughts?

0 Likes
dorono
Staff

Hi happyturtle,

This forum is dedicated to discussions about AMD CodeAnalyst - a legacy CPU profiling tool that has since been replaced by AMD CodeXL.

I think the proper place for your posts is the General Discussions forum here:

General Discussions

0 Likes