I run a Ryzen 7 5800X with a Noctua NH-D15 on an ASUS B550-F. I'm not looking for any awards in overclocking, but when I run tests on the CPU I'm having an issue: in both Cinebench and 3DMark I'm averaging about 300 points less on single-core/single-thread than the average Ryzen 7 5800X, and about 400 points less on multi-core.
Edit: does it matter that I'm running a high-refresh-rate 2K monitor, an EVGA RTX 2070 Super, and 32 GB of RAM at 3600 MHz?
Find a benchmark from a review site and compare your results to theirs. People who test their hardware might, on average, have better-than-typical hardware: fine-tuned B-die memory, water coolers, and whatnot.
My recommendation would be not to overclock, but first to make sure you have XMP/DOCP enabled; then, if your ASUS board has them, use the memory profiles you can find under the memory settings. If your model doesn't have these, maybe try running the memory a notch faster, like 3733-3800 MT/s, and make sure your Infinity Fabric clock is half of that (1866 or 1900 MHz respectively). If the machine won't boot, unplug it, clear the CMOS, and try a lower setting. Also make sure you are running in dual-channel mode (see the motherboard manual for which memory slots to use).
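For a quick sanity check on that 1:1 ratio, here's a tiny Python sketch (the function name is just for illustration) mapping a memory transfer rate to the matching Infinity Fabric clock:

```python
# For 1:1 operation on Zen 3, the Infinity Fabric clock (FCLK, in MHz)
# should be half the memory transfer rate (in MT/s).
def fclk_for(mem_mts: int) -> int:
    """Return the 1:1 FCLK in MHz for a given memory speed in MT/s."""
    return mem_mts // 2

for mem in (3600, 3733, 3800):
    print(f"{mem} MT/s -> FCLK {fclk_for(mem)} MHz")
```

So 3733 MT/s wants FCLK 1866 MHz and 3800 MT/s wants 1900 MHz, matching the numbers above.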
Make sure your CPU temps are good. Maybe there isn't enough thermal paste, or not enough cool air getting into the case. Try running with the side panel open to see if you get a better score.
After this, you could try a negative Curve Optimizer offset. Run Cinebench three times, write down the scores, lower the Curve Optimizer offset, run the benchmark again, and repeat until you find the best average.
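The bookkeeping in that loop can be sketched in a few lines of Python; the scores below are made-up placeholders, not real 5800X results:

```python
# Hypothetical log: three Cinebench runs per Curve Optimizer offset.
# Average each set of runs and pick the best-scoring (stable) offset.
runs = {
    0:   [15050, 15010, 15080],
    -5:  [15220, 15190, 15260],
    -10: [15340, 15310, 15370],
}

def best_offset(results: dict[int, list[int]]) -> int:
    """Return the offset whose runs have the highest mean score."""
    return max(results, key=lambda off: sum(results[off]) / len(results[off]))

print(best_offset(runs))
```

With these placeholder numbers, -10 wins; on real hardware you'd keep stepping down until the average stops improving or instability appears.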
I am not a fan of PBO, as it mostly just increases single-core performance, stresses the CPU, and, judging by benchmarks on YouTube, would appear to lower 1% results (it increases variance, while the average is higher).
Well, traditional overclocking would only help the multi-core score. You can set a manual overclock and get better all-core performance than with the boosting algorithm, but single-core performance typically suffers. You can also activate Precision Boost Overdrive and raise your PPT/TDC/EDC, but again, those are typically only bottlenecks in all-core boosting scenarios.
What you can do is play with Curve Optimizer in the AMD Overclocking / Precision Boost Overdrive UEFI menu. Under Precision Boost Overdrive, you can leave the PBO limits at the processor defaults. Set the scalar to manual and to 2X, and set the boost clock override to 200 MHz. From there, enter Curve Optimizer and enter a negative offset of -5 for each core.
Save your settings and rerun your tests. Run a longer stress test and make sure no WHEA errors or other instability pop up. If it is stable, retest at a -10 offset, and so on. It can take a while to get dialed in, but you should see better results than stock. After that, you could always increase the PPT/TDC/EDC settings, since you have a fairly beefy cooler.
Can you shed any light on the PPT/TDC/EDC settings and whether there are risks involved?
I have overclocked CPUs and GPUs for a long time and done some memory tuning, but while I have heard about these settings, I just don't get what they do or how to use them.
It is easy to understand that if you increase the voltage of some part, there is going to be a safe upper limit. If you undervolt, you won't fry your component, but you need to be sure performance doesn't drop. But what are these settings, how do you use them, and are there safe limits?
So PPT, TDC, and EDC together define the power envelope behind a processor's TDP. A 105 W TDP processor has default limits of 142 W / 95 A / 140 A respectively, while a 65 W processor gets 88 W / 60 A / 90 A.
PPT stands for Package Power Tracking: the maximum power, in watts, the processor package is allowed to draw.
TDC is the Thermal Design Current: the maximum current, in amps, the CPU is allowed to sustain through the motherboard VRMs.
EDC is the Electrical Design Current: the absolute maximum current the CPU can draw, even in short-term spikes.
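Putting the stock numbers above side by side shows a pattern worth knowing: PPT works out to roughly 1.35x the nominal TDP in both cases. A small sketch:

```python
# Stock PPT/TDC/EDC limits for the two TDP classes quoted above.
LIMITS = {
    105: {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140},
    65:  {"PPT_W": 88,  "TDC_A": 60, "EDC_A": 90},
}

for tdp, lim in LIMITS.items():
    ratio = lim["PPT_W"] / tdp
    print(f"{tdp} W TDP -> PPT {lim['PPT_W']} W (ratio ~{ratio:.2f})")
```

Both classes land at ~1.35, which is handy for estimating limits on parts not listed here.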
When your processor boosts, it will attempt to reach Fmax (maximum frequency) within a fixed voltage envelope. However, it can be stopped before Fmax if it hits the maximum allowed temperature, or any of the other limits listed above.
The function of those settings is to give you an idea of the kind of cooler you'd need to run the processor: a processor allowed to do 142 W of work at 95 A is rated as needing a cooler that can dissipate 105 W, etc.
When you engage Precision Boost Overdrive, you can set the PPT/TDC/EDC limits to "motherboard", which places them at values so high that they essentially are no longer constraints. This is "safe" in that the processor will still not exceed its voltage constraints or the maximum allowed temperature. Effectively, with PBO enabled, Fmax, voltage, and temperature become the only bounds.

Of course, most users don't like being at max temperature all the time, so they set these limits to something more reasonable. My 5950X is liquid-cooled, so I set it to 203 W / 140 A / 207 A. This is effectively a 150 W TDP, but it is also the setting at which my CPU hits 70 °C under an all-core load. I am completely PPT-bound when running an all-core load, which means the processor would gain more if I increased the limit, but at the expense of more heat. The extra performance gained going up to 85 °C doesn't really justify the heat generated.
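That "effectively a 150 W TDP" figure follows from the same ~1.35x PPT-to-TDP ratio AMD uses for its stock limits (142/105 and 88/65 both land close to 1.35); a quick check:

```python
# Back out an "effective TDP" from a custom PPT limit, assuming the
# ~1.35x PPT/TDP ratio seen in AMD's stock limits (142 W / 105 W, 88 W / 65 W).
def effective_tdp(ppt_w: float) -> float:
    return ppt_w / 1.35

# The 203 W PPT above corresponds to roughly a 150 W TDP.
print(round(effective_tdp(203)))
```

This is just a rule-of-thumb conversion, not an official AMD formula for custom limits.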
The Ryzen 9 7950X is a great example of that. It has a default TDP of 170 W, or 230 W / 160 A / 225 A. But if you run it as a 105 W processor (142/95/140), it loses less than 10% of its multi-core performance. Those CPUs have been pushed way past the efficiency curve right out of the box, and you can save big on temps and power draw by using the PPT/TDC/EDC settings.
Now, PPT/TDC/EDC usually don't affect lightly threaded workloads at all. If you are running a single thread or just a few, there won't be nearly enough wattage or current in play. Where they do matter is in all-core boosts. Take a Ryzen 7 5800X: within the default TDP you will be able to hit a certain clock speed when all 8 cores are boosting. The Ryzen 9 5950X will hit a much lower clock speed. It has the same default TDP of 105 W, but it takes a lot more current and draws a lot more power boosting 16 cores versus 8, so you are going to hit 142 W and/or 95 A at a lower clock speed.
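To make that scaling concrete, here's a rough per-core budget at the stock 142 W PPT. This naive split ignores SoC/IO power (which real packages also spend from the same budget), so actual per-core figures are lower:

```python
PPT_W = 142  # stock package power limit for a 105 W TDP part

# Naive per-core power budget when all cores boost simultaneously.
for cores in (8, 16):
    print(f"{cores} cores: ~{PPT_W / cores:.1f} W per core")
```

Halving the per-core budget is why the 16-core part runs into the limits at a lower all-core clock than the 8-core part.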
So the more cores you have, the more you will benefit from a raised PPT/TDC, provided your cooler can keep up with the extra heat generated.
So, like I said, for single-threaded overclocking those limits don't matter at all, because you typically aren't reaching them even at defaults. You are usually just bound by the maximum voltage and Fmax: if your processor hits Fmax, it stops boosting, and likewise if it hits the maximum allowed voltage. Curve Optimizer lets you shave some voltage off the boosting algorithm. The processor will then attempt to boost to a given clock speed using less voltage than it normally would. As long as the voltage is still sufficient to maintain the clock speed, the system will be stable. This creates additional headroom to boost a bit further within the allowed voltage envelope.
You can also extend the voltage range by increasing the scalar, but increasing it much past 2X isn't really recommended if you want to maintain the life of your silicon.
Thanks for the explanation. This gives me new things to try.
One thing to note about Curve Optimizer is that it's dynamic: it pulls away less voltage at low clock speeds than at high ones. This prevents the crashes at idle that a blanket negative voltage offset on the CPU can cause.
Every 1 unit pulls away 3 mV at low load and 5 mV at high load, so a setting of -20 pulls away 60 mV at low load and 100 mV at high load. Note that single-core workloads are also "low load". What if you want to pull away less voltage when the load is low but keep the same reduction when it is high? You can set Curve Optimizer to -30, giving you -90 mV and -150 mV, and then add back a flat positive CPU voltage offset of 0.05 V (50 mV). In concert with Curve Optimizer you will now have -40 mV and -100 mV, which is not a combination you could have gotten from Curve Optimizer alone.
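The offset arithmetic above can be sketched as a small helper; the 3 mV / 5 mV per-step figures are the ones quoted in this post, and the function name is just for illustration:

```python
# Combine a Curve Optimizer step count (3 mV/step at low load, 5 mV/step
# at high load, per the figures above) with a flat voltage offset in mV.
def co_mv(steps: int, flat_offset_mv: int = 0) -> tuple[int, int]:
    """Return (low_load_mv, high_load_mv) effective voltage offsets."""
    low = steps * 3 + flat_offset_mv
    high = steps * 5 + flat_offset_mv
    return low, high

print(co_mv(-20))      # CO -20 alone: (-60, -100)
print(co_mv(-30, 50))  # CO -30 plus +50 mV flat: (-40, -100)
```

The second combination trims less voltage at low load while keeping the same -100 mV at high load, exactly the trick described above.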