6 Replies Latest reply on Jul 2, 2011 9:15 AM by williammorgan27

    A different type of 'vision'

    Meteorhead
      DirectX become something new

      Hi!

      Forgive me for the longer post, but it takes a few lines to make my point. This is a suggestion of sorts, or just a thought on how things could look according to my ideas.

      I have read two different statements about graphics APIs hindering game development. One was from a spokesperson at Crytek, and the other from Richard Huddy (link), developer relations manager at AMD. To summarize (and you must correct me on this, as I am mainly an OpenCL developer, just getting to know OpenGL): the issue is that in leading-edge game engine development it is a hurdle that on a console over 6 times more API calls make it through per frame than on the PC, not to mention that console SDKs allow low-level access to the HW, which allows innovation in the graphics engine. These were the two arguments behind the slogan: make the API go away!

      Of course there is a need for APIs, just on a different, somewhat lower level. DirectX was created to offer a unified, vendor-independent way of programming graphics. That was all fine, up until GPUs got too strong for the CPU to translate and convey API calls to the graphics driver. A lower level and less translation would allow more calls, and access to intimate HW features. What would happen if both AMD and NV created a low-level graphics API, similar to CUDA and CAL on the GPGPU front? (Let's not talk about CAL being hidden from developers for a moment.) OpenCL programmers all know that CUDA/CAL are the means to access vendor-specific HW features, and as such OpenCL is a port to CUDA/CAL; it is translated almost as-is to those features. Why not do the same with graphics? Why not create a similarly low-level API to access the GPU from a graphics point of view, and have DirectX and OpenGL serve as ports to these APIs? We saw something similar some 13 years back, and it was called Glide (by 3dfx). It was an API driven by the HW: if a new HW feature came, Glide gave a function to access it. NV purchased 3dfx, the owner of the Voodoo line, and along with it the SLI architecture (the basics of which were developed by 3dfx, a little history). Some years later NV released CUDA, which was similarly a vendor-specific API to access their HW, and it was a major success.
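      The overhead argument above can be sketched with a toy model. This is not real driver or DirectX code; every name in it is illustrative. It only shows why a thin dispatch path fits more calls into a fixed per-frame CPU budget than a path that wraps each call in extra translation layers:

      ```python
      import time

      # Toy model (all names hypothetical): a "portable API" call passes through
      # extra validation/translation layers before reaching the driver, while a
      # low-level vendor API dispatches almost directly.

      def driver_dispatch(cmd):
          return cmd  # stand-in for handing a command buffer to the GPU

      def low_level_draw(cmd):
          # thin path: straight to the driver
          return driver_dispatch(cmd)

      def portable_draw(cmd):
          # thick path: simulate state validation / format translation layers
          for _ in range(8):
              cmd = {"wrapped": cmd}
          return driver_dispatch(cmd)

      def calls_per_budget(draw, budget_s=0.05):
          """Count how many draw calls fit in a fixed CPU time budget."""
          n, start = 0, time.perf_counter()
          while time.perf_counter() - start < budget_s:
              draw({"mesh": n})
              n += 1
          return n

      fast = calls_per_budget(low_level_draw)
      slow = calls_per_budget(portable_draw)
      print(fast, slow)  # the thinner path fits more calls into the budget
      ```

      The console-vs-PC gap in the post is the same effect: less translation per call means more calls per frame for the same CPU cost.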

      If graphics developers would welcome low-level HW access for graphics purposes, why not? Large developers like Blizzard, Valve, etc. could surely afford two separate render engine developer groups (which is almost always at most 5-10 people in a project): one specialized on the NV API, and one on the AMD API. Smaller studios who cannot afford so many programmers would simply write DirectX/OpenGL code, which is somewhat suboptimal compared to the vendor-specific ones, but still performs well. (The same is done with GPGPU: if you want big magic, you use CUDA/CAL; if you want portability, and good cross-vendor programming and compiler optimizations suffice, then you use OpenCL.)

      It is somewhat annoying how consoles still running a GeForce 8900 look just as good as a PC game on GPUs at ~5 TeraFlops. This is a joke, really. OpenCL does kernel compilation once during init, and afterwards not many kernel calls are needed (4-10 max per frame/iteration); depending on the use case, realtime is not even a goal (scientific HPC). If one wishes to use OpenCL for physics and AI in a game, 10 types of kernels will surely still be enough, and that works just fine. DX API translations are done far more often, and as such consume a lot more useful CPU/bus time. I would really welcome games that look a whole lot better, and seeing new techniques that utilize the HW to the fullest.
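      The "compile once at init, launch a handful of kernels per frame" pattern described above can be sketched as follows. The OpenCL calls are mocked here so the structure runs anywhere; in a real program the init step would be clCreateProgramWithSource/clBuildProgram and the per-frame step clEnqueueNDRangeKernel (all other names are hypothetical):

      ```python
      # Sketch of the "compile once, launch many" pattern (mocked, illustrative).

      class MockKernel:
          """Stand-in for a compiled OpenCL kernel object."""
          def __init__(self, name):
              self.name = name
              self.launches = 0

          def enqueue(self, *args):
              # Cheap per-frame call: no recompilation, just a launch.
              self.launches += 1

      def init_renderer(kernel_names):
          # Expensive step, done exactly once at startup: compile every kernel.
          return {name: MockKernel(name) for name in kernel_names}

      def frame(kernels, dt):
          # Per frame only a handful of launches are needed (the post says 4-10).
          kernels["physics"].enqueue(dt)
          kernels["ai"].enqueue(dt)

      kernels = init_renderer(["physics", "ai"])
      for _ in range(60):            # one second of simulation at 60 fps
          frame(kernels, 1.0 / 60)
      print(kernels["physics"].launches)  # 60: one launch per frame, zero rebuilds
      ```

      The contrast with the per-call translation done by a portable graphics API is the point: here the expensive work is paid once, not on every call.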

      Now let me ask the developers: would this be a favorable course of events?

        • A different type of 'vision'
          nou

          Yes, it is a joke, but it is not a limitation of DX11/OGL. Games today still live in the DX9 days, and we will only see better games with a new generation of game consoles.

          The problem with today's games is that they are made for consoles first and then ported to PC. They just add one or two effects from DX11, and that is all.

          We can be glad that there are any ports from consoles to PC at all. With vendor-specific APIs it would be even worse than today.

            • A different type of 'vision'
              Meteorhead

              I do not quite understand why it would be worse than today. If there were vendor-specific APIs, there would be no need for console SDKs. Games 'ported' to PC would look equally good as on a console, but would run a lot better. But really, there would be no need to port, because the same code would run both on a console and on a PC.

              Edit: The same issue discussed on NV forum

              http://forums.nvidia.com/index.php?showtopic=196043

                • A different type of 'vision'
                  qwerty109

                  You usually develop the game for the lowest-common-denominator platform, especially when the main revenue comes from 360/PS3, not PC. For most games you then allow for higher-res displays/shadow maps, more of the same effects, better LOD ranges, etc. for high-end PCs (while you still have to support old PCs in order to sell!). In some cases you'll sprinkle in a couple of cheap-to-develop DX11 features, but that's more for marketing purposes than anything else.

                  Why would anyone invest money in developing high quality graphics (it's not just coding, it's also art and other stuff!) when the high-end PC install base is very small (http://store.steampowered.com/hwsurvey) and half of them will pirate the product anyway? :)

                  I'm not saying I like it, but that's just the way it is :)

                   

                  The other stuff about having vendor-specific low-level access to HW on PC makes no sense at all - it's already much more difficult to develop for PC due to the hugely varied install-base hardware; without DirectX/OpenGL it would be next to impossible to do anything, especially for smaller players.

                    • A different type of 'vision'
                      Meteorhead

                      I did point out that large companies would surely develop for it, because they have the assets to do it. If you look at Diablo 2, its engine still had Glide support.

                      Games do not have to look the same on AMD and NV. Models and art can be the same; shaders, anti-aliasing and all of that could be vendor-specific. I admit that middle-class hardware is the most common in terms of numbers, but for crying out loud: middle-class hardware is, e.g., an HD 5770. Where on Earth is that in terms of power compared to a PS3? It has triple the muscle. Even a Mobility 6550 has more power than any current console.

                      People pirate games because they are so damn expensive. If they cost a third of the amount, I would buy all my games. Everybody likes having the original boxes on their shelves.

                      Anyhow, back to topic: I do feel that there are many features that the hardware allows but I cannot use for lack of software support (OpenCL and Global Wave Sync). OpenCL would let all of this happen with vendor-specific extensions, but there are simply too many things to port, and the SDK developers seem unable to cope with all the wanted features. Since CAL is about to be "deprecated", we have no other choice but to wait for them to implement all these features. Btw, if you look at the sticky topic at the top of the OpenCL forum, there are gazillions of features wanted: Global Data Share, Global Wave Sync, pseudo-double precision, dual-GPU support... just to name a few requested only by me. These features alone would enable me to rewrite a few algorithms I made, and they would perform A LOT better.

                      Large companies could afford to write a proper interface to the render engine and have whatever vendor-specific magic be put behind it. Smaller developer groups could program in DX11+.

                • A different type of 'vision'
                  williammorgan27

                   

                  Originally posted by: Meteorhead [verbatim quote of the original post, trimmed]
