
    Shader Performance!?

    ryta1203

      Overall, which shader mode gives the best theoretical performance?

      It seems from the docs that pixel shader mode provides better theoretical performance on current hardware; is that correct?

        • Shader Performance!?
          MicahVillmow
          Compute shader has a better chance of reaching peak performance because it is not part of the graphics pipeline.
          For example, when you run a pixel shader, a vertex/geometry shader must be executed first in order to generate the pixels, so there is overhead involved and it consumes resources. In compute shader mode you basically get all the resources on the chip. Actually achieving that peak performance is another matter, however.
          • Shader Performance!?
            MicahVillmow
            If you write Brook code without AMD extensions and use the older Brook codebase, you can compile for the vast majority of graphics cards using the DX/OGL backends. With pixel shader mode you can target all Radeon HD cards, while compute shader can only target the HD4XXX series and later cards. So yeah, compatibility is a reason.
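            For reference, here is a minimal sketch of such plain Brook+ code (no AMD extensions), following the kernel/streamRead/streamWrite pattern used in the SDK samples; the saxpy name and the 1024-element size are just placeholders:

              // saxpy.br -- hypothetical example; each kernel invocation produces one output element
              kernel void saxpy(float a, float x<>, float y<>, out float result<>)
              {
                  result = a * x + y;
              }

              int main()
              {
                  float input_x[1024], input_y[1024], output[1024];
                  float x<1024>;        // input stream on the GPU
                  float y<1024>;        // input stream on the GPU
                  float result<1024>;   // output stream on the GPU

                  // ... fill input_x and input_y on the host ...

                  streamRead(x, input_x);        // copy host data into the streams
                  streamRead(y, input_y);
                  saxpy(2.0f, x, y, result);     // dispatched as a pixel shader by default
                  streamWrite(result, output);   // copy the result back to the host
                  return 0;
              }

            Code like this, with no extensions, is what can be compiled through either backend.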
              • Shader Performance!?
                ryta1203

                So does Brook+ detect what card you are using and compile to either PS mode or CS mode accordingly?

                  • Shader Performance!?
                    garrison

                    I have an HD4870 card. How can I use compute shader via Brook+?

                    • Shader Performance!?
                      Gipsel


                      Originally posted by: ryta1203 So does Brook+ detect what card you are using and compile to either PS mode or CS mode accordingly?


                      No, it compiles to pixel shader code by default (just look at the generated .h file) and switches to compute shader only if you use some compute shader features. The hardware-dependent compilation is done by the Brook+ runtime (which calls the CAL compiler), but it does not change the shader mode.
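                      To make that concrete for garrison's question, below is a rough sketch of a kernel that uses such compute shader features, so that brcc emits compute shader code instead of pixel shader code. The Attribute[GroupSize(...)] annotation, shared (LDS) arrays, and the instance()/instanceInGroup()/syncGroup() intrinsics are recalled from the Stream SDK Brook+ compute shader documentation, so verify the exact names against your SDK version:

                        // Assumed Brook+ compute shader syntax; the GroupSize attribute is
                        // what switches this kernel to compute shader mode.
                        Attribute[GroupSize(64, 1, 1)]
                        kernel void copy_cs(float input[], out float output[])
                        {
                            shared float tile[64];          // local data share (LDS) for one thread group

                            int gid = instance().x;         // absolute thread index (assumed intrinsic)
                            int lid = instanceInGroup().x;  // index within the 64-thread group (assumed intrinsic)

                            tile[lid] = input[gid];
                            syncGroup();                    // barrier across the thread group (assumed intrinsic)

                            output[gid] = tile[lid];        // scatter-style output (assumed to be allowed in CS mode)
                        }

                      If the switch works, the generated .h file should contain compute shader code rather than pixel shader code, which is what to check for as described above.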

                    • Shader Performance!?
                      godsic

                      to MicahVillmow:

                      What about the OpenCL implementation?

                      Compatibility or performance, or maybe automatic determination of the best way to run?

                      Why did it take ATI four GPU generations (since the X1XXX) to decide to reveal the real face of the GPU?

                        • Shader Performance!?
                          riza.guntur


                          Originally posted by: godsic to MicahVillmow:

                          What about the OpenCL implementation?

                          Compatibility or performance, or maybe automatic determination of the best way to run?

                          Why did it take ATI four GPU generations (since the X1XXX) to decide to reveal the real face of the GPU?

                          I think it is because the previous hardware doesn't have FP32 capabilities.