4 Replies Latest reply on Jul 8, 2018 2:24 AM by richt

    When to rollover from GPU to CPU decode

    richt

      Hi,

      I have a decode library that uses AMF for AMD hardware acceleration. How do I know when the GPU's decode resources are maxed out, so that I can degrade gracefully to CPU decoding?

      My test Kaveri processor returns AMF_NOT_FOUND when I call:

      sts = m_pAMDDecoder->GetProperty(AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS, &supported);

      Do I read the remaining GPU memory apertures as a best guess?

      Cheers for any pointers.

      Rich.

        • Re: When to rollover from GPU to CPU decode
          elstaci

          Really have no idea what you are posting about, but I found this on GitHub concerning AMF: AMF/VideoDecoderUVD.h at master · GPUOpen-LibrariesAndSDKs/AMF · GitHub.

          Copied from the above link. Not sure if this is what you are talking about:

          // Decoder capabilities - exposed in AMFCaps interface
          #define AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS  L"NumOfStreams"   // amf_int64; maximum number of decode streams supported
            • Re: When to rollover from GPU to CPU decode
              richt

              Thanks for the reply.

              I can link and call the query you suggest at runtime:

              m_pAMDDecoder->GetProperty(AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS, &supported);

              However, as I mentioned, the call returns AMF_NOT_FOUND.

              Note that all other calls to Get/SetProperty seem to be working.

              The scope of what I'm trying to achieve is best explained with an example:

              I'm looking to create a generic AMD decode interface that works across all AMD GPUs.

              On lower-end AMD chips I can get two simultaneous 1080p @ 25 FPS decodes; on higher-end chips many more, say 10 decodes.

              On a lower-end chip, opening a third decode reduces performance across all running decodes.

              To avoid this, I need to know when to stop requesting accelerated GPU decode from AMD and fall back to, say, a software decode library.

               

              Hope this makes sense.

              NVIDIA and Intel both provide working functions for this.

              I'm unsure why AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS does not work.

              I do need to test on more AMD devices than just the Kaveri...

              Cheers.

                • Re: When to rollover from GPU to CPU decode
                  elstaci

                  Your best bet is to open an AMD email support ticket. They may be able to direct you to the correct AMD forum that deals with what you are trying to do. Email Form

                  Also, Devgurus might be the best AMD forum for you to get your answer. If you have problems posting your question, post back here to let us know.

                  Copied from: Devgurus

                  WELCOME TO DEVGURUS

                  DevGurus is a community of software developers using AMD technology and industry standards supported by AMD.

                  Developers can post in any of the forums within DevGurus. If you don't find a good fit for your topic in one of the sub-forums, then please put it in the General Discussions forum. The particular DevGurus space is for questions about DevGurus itself.

                  If you are a newcomer, you can post only in the Newcomers Start Here forum. Here's why.

                  DevGurus is a walled garden within the larger AMD Community. The "Newcomers Start Here" forum is moderated to protect our members from spam. A real human being will read your post to make sure it is appropriate for our developers. Once we see that you really are a developer, we will add you to the whitelist so you can post anywhere.

              • Re: When to rollover from GPU to CPU decode
                richt

                Cheers for the direction, I'll try this route next, appreciated.

                rich.