Posting a few updates. I re-ran my per-CCX specific settings. Here is what I had back in August of 2019.
While I was aware of the possibility of per-CCX clocking, I was a bit dubious about its practical use, as it would involve targeting the applications to specific cores/threads.
I discussed how to achieve this in my post:
There is no point in being able to clock a CCX higher if the programs won't cooperate by actually using those cores/threads.
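For example, the kind of targeting I mean is pinning a program to the threads of the faster CCX, either via Task Manager / `start /affinity` or programmatically. A minimal sketch using Python's psutil (the logical-CPU numbering here is an assumption; the actual CCX-to-thread mapping varies by chip and OS):

```python
import psutil

# Hypothetical mapping: on a chip with SMT, logical CPUs 0-7 often belong to the
# first CCD/CCX, but the exact layout is an assumption -- check it for your chip.
FAST_CCX_THREADS = [0, 1, 2, 3, 4, 5, 6, 7]

p = psutil.Process()               # the current process (or pass a PID)
p.cpu_affinity(FAST_CCX_THREADS)   # restrict scheduling to those logical CPUs
print(p.cpu_affinity())            # confirm the new affinity mask
```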
Another thing to consider is that by clocking some CCXs higher and some lower, you end up with a swings-and-roundabouts scenario in applications that create an all-core load, such as CineBench, Blender, etc.: what you gain on some cores you lose on others.
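To put rough numbers on that (purely illustrative figures, not measurements from my chip): for an all-core load the aggregate clock budget is what matters, so an uneven split across CCXs buys nothing overall.

```python
# Illustrative only: two CCX groups of 8 threads each, clocks in GHz (assumed values).
threads_per_ccx = 8

def all_core_throughput(ccx_clocks):
    # crude proxy: total clock cycles available per second across all threads
    return sum(clock * threads_per_ccx for clock in ccx_clocks)

print(all_core_throughput([4.5, 4.2]))    # 69.6 -> uneven per-CCX clocks
print(all_core_throughput([4.35, 4.35]))  # 69.6 -> same aggregate with a flat clock
```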
It was for this reason that I abandoned this approach pretty early on in my experimentation with 3rd Gen Ryzen after I got my 3950X.
With regard to not being able to notice a difference between 4.5 GHz and 4.4 GHz, I would suggest that you keep an eye on the temperature of the CPU to see if that could be having an effect.
I do not think you can compare the two systems at all.
Too many variables.
Something could have happened to parts during shipping.
No real proof of how the friend in Sweden ran their PC.
Perhaps they really overvolted the CPU for months on end.
You have no proof.
You have a theory.
You want to prove yourself correct.
You create this post.
You control the entire outcome.
So you can say you are correct.
In a scenario where all the CCDs are equal, it is a complete waste of time. The only reason I looked at specific CCX settings back in August 2019 was due to the massive disparity in efficiency between the CCDs on my 3900X. Certainly it is possible that with better yields the CCDs on these chips are more similar now, rendering this sort of analysis obsolete.
My thinking was: why waste the bulk of your power budget attempting to boost cores that simply do not boost efficiently beyond a certain point? This approach also largely mitigates the lightly-threaded losses versus the stock boosting algorithm, while still providing an increase in multithreaded boosting.
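A rough way to see the power argument (the voltages below are assumed example values, not readings from my 3900X): dynamic power scales roughly with V² × f, so a CCX that needs noticeably more voltage to hold the same clock eats a disproportionate slice of the package power budget.

```python
# Dynamic power is roughly proportional to V^2 * f; the figures below are assumptions
# chosen only to illustrate the scaling, not measurements.
def relative_power(volts, ghz):
    return volts ** 2 * ghz

good_ccx            = relative_power(1.25, 4.5)  # efficient CCX hits 4.5 GHz at modest voltage
weak_ccx_pushed     = relative_power(1.40, 4.5)  # weak CCX needs much more voltage for 4.5 GHz
weak_ccx_backed_off = relative_power(1.25, 4.3)  # same voltage budget, slightly lower clock

print(good_ccx, weak_ccx_pushed, weak_ccx_backed_off)
# ~7.03 vs ~8.82 vs ~6.72: pushing the weak CCX to the same clock costs ~25% more power,
# while backing it off keeps it inside the same per-core power envelope.
```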
Hi, I have also experienced degradation while using default settings; I just found out this week. I have a Ryzen 3600 and an ASRock B450M Steel Legend, and I built this PC in July 2020. I found its FIT voltage is very low, at 1.22 V. I then tested an overclock and could get 4.400 GHz at 1.212 V, which I was very impressed with.
Reading the forum, many people recommend using the default settings because that is the best way to manage voltage and frequency. As I use my PC for various tasks, none of them CPU-demanding, I ran the default settings instead of the overclock. I also want to keep this PC for a long time.
The last time I applied the overclock was 12 Sept (2 weeks ago), after I changed the CPU thermal paste and wanted to compare temperatures at stock and at maximum performance. At that point I still got 4.400 GHz at 1.212 V. After that I reverted to the default settings.
Then this week I tried the overclock profile again and could not get 4.400 GHz at 1.212 V; the highest I can reach at 1.212 V is 4.300 GHz. The CPU degraded by 0.100 GHz within two weeks, and during that whole period I was only using the default settings. It is very recent, and I remember that in that time I only used this PC to play games (Sekiro and NFS Heat) and to do light work, on average 4 hours per day.
If I had never tried overclocking, I would never have found out that my CPU has degraded. The performance at default settings is as usual and still works as advertised; only the overclocked performance has degraded.
Actually, Gigabyte boards are "not" standard!
To get standard settings for voltage and LLC you need to set Voltage to "Normal" and LLC to "Standard"!
Gigabyte, ASUS and MSI each use their own settings for "Auto", which are different from AMD stock.