There has long been an ongoing battle over which operating system is superior, and with virtualization technology this battle may soon come to an end. The truth is that no operating system is superior to the others. It is, for example, well known that Windows has some severe flaws at the low level when you look "under the hood", but it is unmatched when it comes to the abundance of software and computer games. It is also well known that ZFS, found in Solaris-based operating systems, is a file system unmatched in terms of reliability and protection against data corruption. That is a growing concern, as larger and denser storage hardware has become less reliable in the past few years (many more hard drives have failed on me compared to 10 years ago). I'm very concerned about these issues, and I can no longer trust a hard drive in a Windows environment to reliably keep my data. Linux, in turn, has many advantages in terms of resource efficiency and stability. This list of operating systems and their advantages/disadvantages could go on...
So why should I have to choose? Why can't I take advantage of all these benefits and get the best of all worlds? The answer is that I can, through virtualization. In the past few years the world has seen exciting development in the Xen community, and powerful extensions that enhance virtualization capabilities, such as Intel VT-x / AMD-V and Intel VT-d / AMD-Vi (IOMMU), have become widespread in desktop hardware, having been commonplace in enterprise-level hardware for quite some time.
(url to full-size picture: http://img265.imageshack.us/img265/7963/multiplatform1.png)
So it is quite evident that the role of the operating system is going to change considerably in the future. The operating system that runs on the metal will become a simple hypervisor that manages virtual machines. Operating systems as they are today will shrink into so-called wrappers that merely supply the frameworks required to run a particular piece of software (such as .NET, the Visual Runtime, etc.).
So there will be a separation between the hardware and the operating systems by an abstraction layer, where different wrappers (what used to be operating systems) share the underlying hardware with each other. There will no longer be a question of whether you use Windows, MacOS or Linux. You just use whatever you prefer as a base OS and whatever is needed to run the applications you want, which in reality could mean that you run several operating systems simultaneously on the very same machine.
This separation has already begun; ZFS is a good example of that. The ZFS file system treats the hard drives as a storage pool, and the user is not concerned with the physical characteristics of the partitions or where the sectors begin and end. I didn't like it at first but later found that this approach is ingenious. So I see it as a natural step that the rest of the hardware will undergo the same transition. I also think a lot can be done with the UEFI framework in this regard.
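As a sketch of what that pooling looks like in practice (a command recipe, not something to run blindly: the device names, pool name and dataset names below are all hypothetical and must match your own system):

```
# Create a mirrored pool from two whole disks -- no partitioning,
# no sector arithmetic, the devices are simply handed to the pool:
zpool create tank mirror c0t1d0 c0t2d0

# Datasets are then carved out of the pool on demand; they draw from
# the pool's shared free space instead of living in fixed partitions:
zfs create tank/photos
zfs set compression=on tank/photos
```

The point is that the administrator reasons about the pool and its datasets, never about where on the platters anything lives.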
The latest advancement in virtualization technology is the set of IOMMU extensions, which allow virtual machines to run directly on selected parts of the host's hardware. This means that I can run, say, Linux on the metal while playing Crysis 2 in a virtual machine that runs directly on the GPUs. Here's a video showing Unigine Heaven running on a virtual Windows machine inside Ubuntu on a dual-GPU setup:
This is called PCI passthrough, where PCI devices are passed through to the virtual machine, or VGA passthrough, where the VGA BIOS mappings are also sorted out. In another setup I may want to run Windows on the metal and pass through a whole hard disk controller to a Solaris machine where I run a secure storage pool with redundancy (e.g. raidz3). For ZFS to give proper protection against data corruption it is imperative that it runs directly on the hardware and not through a virtualized abstraction layer. There is currently no IOMMU support on Windows hosts, but that will change eventually; our hopes lie with Hyper-V, VirtualBox and VMware.
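As a sketch of what this looks like with Xen, the device is hidden from the host (e.g. via pciback) and its PCI address is then listed in the guest's config. The names and the address below are hypothetical and must match your own hardware:

```
# Hypothetical Xen domU config fragment. "03:00.0" stands in for the
# PCI address of the disk controller (find yours with lspci).
name   = "solaris-storage"
memory = 4096
# Hand the entire controller to the guest; the host must not touch it.
pci    = [ '03:00.0' ]
```

Inside the Solaris guest, the disks behind that controller can then form the raidz3 pool, with ZFS talking to the real hardware rather than to emulated disks.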
However, there is a lot to be done, and the purpose of my post in these forums is to address this. For PCI passthrough and VGA passthrough to work, the hardware must support function level reset (FLR), a feature that allows a device to be reset and reinitialized at any time on a running machine (i.e. at function level). FLR is standard on Quadro FX cards, and Nvidia supplies patches that enable FLR on GeForce cards upon request.
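On Linux, whether a device advertises FLR can be read from the output of `lspci -vv`: the DevCap line of a PCI Express device shows "FLReset+" when function level reset is supported. A minimal sketch that picks those devices out of that output; the function name and the sample format are my own:

```python
import re

def devices_with_flr(lspci_vv_output):
    """Return PCI addresses whose DevCap line advertises FLReset+.

    Expects text in the format of `lspci -vv`, where each device block
    starts at column 0 with its bus:dev.fn address and the capability
    lines below it are indented.
    """
    flr_capable = []
    current = None
    for line in lspci_vv_output.splitlines():
        # A device header starts at column 0, e.g. "03:00.0 VGA ..."
        m = re.match(r'^([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s', line)
        if m:
            current = m.group(1)
        elif current and 'FLReset+' in line:
            flr_capable.append(current)
    return flr_capable
```

To use it, run `lspci -vv` (as root, so the capability lines are visible) and feed its output to the function.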
Another issue is that current virtualization technologies only support passthrough of entire GPUs to virtual machines; GPUs can currently only be shared through emulation, which makes it impossible to run applications that rely on hardware-accelerated 3D (such as DirectX games). This situation is pretty much where CPU virtualization stood before the VT-x/AMD-V extensions were introduced: CPU instructions had to be emulated for the VM, which severely degraded its performance. When VT-x/AMD-V came, virtual machines could run directly on the CPU with almost no overhead at all.
So I would like to suggest similar extensions that allow GPUs to be shared among several machines, just like CPUs can be shared via VT-x/AMD-V.
So my suggestions in short:
Some advantages of using the technology I discussed above on a desktop computer:
I really liked the stuff that you've explained. I might misunderstand the basics of the system, but the Xen server video looked real cool and convincing.
I might have explained this in another topic (I'm not sure if it was here, Alienware, VMware or another forum): practically, everyday hardware can fit into arbitrarily small sizes today. Gaming laptops 6-10 years ago were laughed at; today there are multiple classes of gaming laptops. More and more compute power can be packed into smaller and smaller sizes. I envision desktop computers becoming extinct in a few years' time, because there is no need for such bulky equipment when a top-class notebook can satisfy the computing needs of 99% of the population. Since I own a fairly powerful notebook, I've been trying to create two full-fledged machines from one, where I could play with friends over LAN, or let my GF or whomever else surf the web or check emails while I'm minding my business. The latter is a piece of cake; CPUs can be shared easily, as you have said. Games, however, don't run that well inside virtual machines, not to mention that present-day virtual machines (AFAIK) cannot select a GPU to use for rendering; they use whatever is used for desktop rendering on the host.
It would be really nice: since Trinity APUs (not to mention future hardware) will feature a powerful IGP, and assuming the computer is also equipped with at least one more discrete GPU, why couldn't virtual machines use dedicated GPUs to render their own stuff? Say, in the case of a notebook, the host system uses the IGP and the guest OS the discrete GPU. Or, in the case of a family where multiple people game from time to time, it would be nice if, when only one person is using the computer, he/she had all the resources, all the GPUs, and when virtual environments are launched, they start to disconnect GPUs from the host and use them for themselves.
I know that doing all this dynamically is really not easy, and it might require the FLR axero mentioned, but I think this could be one future. Having a CrossFire system inside either a desktop or a DTR, plus an IGP, I really see no reason why it couldn't be possible to use them for multiple systems running in parallel, or have them merged for one OS to take.
Developing virtualization extensions to share GPUs across systems is the flexible way of sharing a CF system over multiple OSes, since there is no need to separate the cards (CF being a very intimate joining of the GPUs), whereas PCI passthrough is the more efficient way of doing it, as it abolishes the overhead of a virtual driver that all display calls must pass through. This is one of the biggest drawbacks of using graphics or GPGPU inside virtual systems (the latter being outright impossible): the biggest bottleneck in today's games is the APIs, and the virtual driver only adds to it.
Tell me if my comment is completely off base, or if it is relevant to what you were saying.
The possibility of using dedicated GPUs for virtual machines is where we are going right now (which should be evident from the YouTube clip, which not only demonstrates it but also provides instructions for anyone to reproduce it). I'm all for the capability to dedicate any type of (I/O) hardware to virtual machines. But for that to work we need FLR capability on all hardware. FLR is no rocket science, but what needs to be done is to make sure that every vendor (such as Realtek, VIA, Qualcomm, LSI, Intel, Marvell, ...) includes it on all their hardware by default, which is what I'm pushing for in the previous post.
What I'm also pushing for is virtual extensions that allow for seamless sharing of any GPU between the host and virtual machines just as CPU cores can be shared. With this type of extension, the overhead is rather negligible and that's the beauty of it. The following link shows some benchmarks of virtualization:
So yes, there is a penalty to CPU performance when using virtualization, but it is rather small and is likely to get even smaller as these hypervisors get tweaked over time. I think I can take a 5% performance hit on my GPU for sharing it with virtual machines. If not, I could fall back on VGA passthrough.
But what's interesting is how the GPU will be shared. A GPU is a little different from a CPU. What defines a "GPU core"? Whereas a CPU may have 4, 6, 8 or 12 cores, a GPU may have thousands of cores and is designed quite differently. So an interesting question is how these cores can be shared efficiently between the host and the VMs. It would be interesting to know how "GPU scheduling" compares to "CPU scheduling" in terms of multitasking capabilities.
It would also be interesting to work out how to design these extensions so that several machines can share the same computer screen (or screens in a multihead setup). Perhaps some shader units on the GPU could be dedicated to screen blending, so that the screens of several virtual machines can be blended using dynamic alpha channels and 3D translucency effects.
I'm not familiar with the technical aspects of CrossFire or SLI but I assume that using several GPUs in a virtualized environment would result in a NUMA-like solution.
I know that AMD's FirePro V9800P is specifically designed for handling multiple virtual desktops, but I don't know how the virtualization part works; the technical information about it is not exactly readily available, even though there is a product page:
It also seems that this GPU only supports Windows environments. If that's the case, then they are really shooting themselves in the foot.
I personally don't believe we're likely to count out the stationary platform completely anytime soon. Unless some revolutionary technology lies on the horizon, a stationary system will always have considerable advantages over a mobile platform in terms of performance and expandability. But what the future holds, nobody knows.
If the infrastructure allows for it, we are likely to use cloud services. I think that's what Ubisoft is trying to explore with the virtualization team that put together the rig in the YouTube clip. In this case a computer game may be its own machine, and the hard disk image of that machine may stay in the cloud. But cloud services will always have web traffic and latency working against them, which will make latency-critical services difficult to implement well. Perhaps a local network protocol enhanced for lower latency could allow for local cloud-like (thin) client-host solutions.
I don't see why it wouldn't be possible that, inside virtualized environments, the GPU scheduler is simply used to deal with display requests from multiple hosts. It is only a question of sorting out render targets.
Or the stricter / less flexible solution is device fission, which might also work for CF/SLI configurations.
I think it's only a matter of programming; the hardware is already capable of solving this (although it takes a LOT of effort to implement something like this).
I would say that the situation with GPUs today is rather similar, or at least comparable, to the situation with CPUs back in the early 2000s. Sure, virtualization of the CPU can be handled entirely in software, but that turned out to be a tremendously daunting undertaking. This ushered in the development of hardware-assisted virtualization. The concept is explained here:
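For reference, on Linux you can see whether a CPU advertises these hardware extensions in /proc/cpuinfo: the "vmx" flag means Intel VT-x, "svm" means AMD-V. A small sketch (the function name is my own):

```python
def hw_virt_support(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V) or None, based on the
    flags line of /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            # The flags line looks like "flags : fpu vme ... vmx ..."
            flags = line.split(':', 1)[1].split()
            if 'vmx' in flags:
                return 'vmx'
            if 'svm' in flags:
                return 'svm'
    return None

# Typical use on a Linux host:
# with open('/proc/cpuinfo') as f:
#     print(hw_virt_support(f.read()))
```

Note that the flag only tells you the CPU supports the extension; it can still be disabled in the BIOS.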
The following paper also gives a good explanation of the GPU situation:
I really don't understand your question, this is not about a particular operating system.
This thread is about a technology that lets you run several operating systems at the same time on one computer, seamlessly sharing the hardware and the space on your computer screen.
I also discuss why it is good and the consequences this technology will have for future operating systems. It basically means more freedom for end-users like yourself, as well as increased opportunity for operating system vendors to gain, or at least maintain, market share.
But since this thread is aimed at technically-versed people, I am using technical terms that may be difficult for the "average-Joe" to understand.