3 Replies Latest reply on Jun 11, 2018 10:35 PM by wimpzilla

    Understanding XFR & Precision Boost in detail


      This week I bought my first Ryzen (a 2700X) and I am amazed by how it works. XFR2 and PB2 are an impressive experience and technical feat. It reminds me a lot of how Vega handles its power states, and as with Vega it is actually hard to make the chip unstable on purpose (not that I have tried). However, I have noticed there are situations where the chip has prolonged exposure to 1.45+ V. By prolonged I mean 10+ minutes during which the average voltage stays above 1.45 V. All the information I have gathered on CPUs over the years says this is on the high side, especially for the stock cooler. I have looked at Robert Hallock's explanation of XFR2 and PB2: the algorithm takes power, temperature and maximum clock into account, but it does not seem to consider how long the chip is exposed to critical voltages. The situation I am talking about is a low-load single-threaded workload where the voltage for the entire chip is boosted to 1.45+ V for an arbitrary amount of time.


      This discussion has two goals for me: 1) to check whether the behavior I am seeing is intended and not somehow induced by my motherboard or by a hardware/software fault; 2) to gain some insight into the expected/safe behavior of PB2/XFR2, how dependent it is on the motherboard, and what "knobs" there are for configuration (if any).


      I think we can cover these goals with the following questions:
      - Is prolonged exposure to 1.40–1.50 V VCore an issue for the 2700X?
      - Does XFR2 take time into account when the chip is exposed to voltages that are deemed safe only for short bursts?
      - When XFR2 and PB2 are enabled, are there any BIOS settings on the motherboard that factor into the scaling of voltages and clocks?


      Maybe it is impossible to say without revealing too many company secrets, but:
      - How is the chip able to "sense" what voltage is needed for a given frequency to remain stable? There seems to be some sensing going on, since there are reports that the maximum frequencies achieved differ per chip even when cooling and power delivery are kept the same. So the chip is smart enough to work out which voltages are safe at which frequencies.


      Thanks to anyone who can share some info!

        • Re: Understanding XFR & Precision Boost in detail



          I would suggest checking the HardOCP site; there are some nice articles about PB/XFR there, very well explained.

          I will give you the short and honest answer, imo: AMD more or less followed its competitor, providing a boost mechanism that is not, in any way, tunable by the user.
          There is no secret in how it works; the real secret lies in the fact that you can't tune it or modify it, for now.
          I hope it will become something we can leverage, and not a marketing/product-segmentation tool.


          In electronics, especially high-end electronics, it is not hard to sense the common physical parameters: temperature, voltage, current, pressure.
          Having these physical parameters, it is quite easy to calculate everything else and build up an algorithm that manages the CPU clocks based on the ambient parameters.
          Also, one already knows more or less the maximum clocks and voltages a silicon die will achieve, whether it is a GPU, CPU, FPGA, or whatever.
          The maximum clock and voltage are largely determined by the chip architecture and the silicon manufacturing node, i.e. how complex and how small it is.
          So, in a few words, one already knows almost all the CPU parameters on paper and can even predict how it will behave.
          We end up with these PB/XFR features applied to the CPU, and they work very well, tbh.
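          The loop described above (sense parameters, compare against known limits, pick a clock/voltage point) can be sketched in a toy way. Everything below is illustrative: the limits, the V/f table, and the one-step back-off rule are invented for this sketch and are NOT AMD's actual PB/XFR algorithm or real silicon values.

```python
# Toy boost-governor sketch. All numbers are made up for illustration;
# this is not AMD's PB/XFR implementation.

# Hypothetical budgets, loosely in the spirit of temperature/power/current limits.
TEMP_LIMIT_C = 85.0
POWER_LIMIT_W = 105.0
CURRENT_LIMIT_A = 140.0

# Assumed voltage/frequency points for this die, sorted by clock (invented).
VF_CURVE = [
    (3700, 1.20),
    (4000, 1.30),
    (4200, 1.40),
    (4300, 1.475),
]

def pick_boost_point(sensed_temp_c, sensed_power_w, sensed_current_a):
    """Return the highest (clock_mhz, volts) point allowed by the sensed values.
    Each limit that is exceeded knocks the boost down one V/f step."""
    idx = len(VF_CURVE) - 1  # start from the maximum boost point
    for value, limit in ((sensed_temp_c, TEMP_LIMIT_C),
                         (sensed_power_w, POWER_LIMIT_W),
                         (sensed_current_a, CURRENT_LIMIT_A)):
        if value >= limit:
            idx -= 1  # back off one step per violated limit
    return VF_CURVE[max(idx, 0)]
```

          With all parameters in budget this returns the top of the curve; as limits are hit, it steps down the table, which is the general shape of the behavior described above.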


          So, to answer your questions:

          1.5 V while boosting on a couple of cores is not an issue at all; it begins to be an issue if you push 1.5 V at full load on all cores, for long periods, with bad cooling.
          What PB/XFR do is crank one or a couple of cores up to at most 4.3 GHz. For those clocks, the Ryzen+ architecture/node needs roughly 1.45–1.5 V.
          As we said before, the algorithm evaluates how good the silicon die is, using the physical parameters discussed previously.
          It then chooses a voltage that suits the silicon quality, for that fixed clock, so it stays stable and relatively safe.
          The aim of these AMD features is to adapt to the current application landscape.
          Since clock and architecture define single-thread performance, in single-threaded or lightly threaded applications the CPU prefers to boost to higher clocks, with higher voltages, on one to four cores, for example.
          The rest are left in a lower state; since only those 1–4 cores out of 8 are really in use, they are indeed maxed out, for maximum performance output.
          When a heavily multithreaded application hammers the CPU instead, all the cores are fully active, at the maximum sustainable clocks and voltage for an 8-core load: again, maximum performance output.
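          That "adapt to the load" idea can be illustrated with a toy table lookup. The thread-count thresholds, clocks, and voltages below are invented for the sketch, not AMD's real boost tables.

```python
# Toy illustration of load-adaptive boosting: few busy threads -> boost a few
# cores high; many busy threads -> all cores at a sustainable all-core clock.
# All numbers are made up; this is not AMD's real behavior table.

def target_state(busy_threads, total_cores=8):
    """Map a busy-thread count to a hypothetical boost state."""
    if busy_threads <= 2:
        return {"boosted_cores": busy_threads, "clock_mhz": 4300, "volts": 1.475}
    if busy_threads <= 4:
        return {"boosted_cores": busy_threads, "clock_mhz": 4200, "volts": 1.40}
    # Heavy multithreaded load: every core active at a lower sustained point.
    return {"boosted_cores": total_cores, "clock_mhz": 4000, "volts": 1.30}
```

          A single busy thread lands at the highest clock and voltage on one core, while a fully loaded chip runs every core at the lower sustained point, matching the two regimes described above.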


          In a few words: it adapts to your software landscape to always get the best performance-per-core ratio, while keeping CPU temperature, voltage, and current under control.
