

Journeyman III

My Ryzen 5 5600X and its high temperature issues.

Hi everyone,


So I've recently built a new PC with the AMD Ryzen 5 5600X as my CPU, but I've been struggling to get reasonable temps for it. I downloaded Ryzen Master to check the temperatures and found that my CPU normally idles at around 30°C (though it sometimes spikes to idling at 50°C), and when I ran Cinebench R23 it shot up to 95°C extremely quickly; I felt I had to stop the test before I damaged my CPU.
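For anyone who wants more than spot checks in Ryzen Master, a small logging helper can capture how the temperature behaves across a whole benchmark run. This is only a sketch: the reader function is supplied by the caller (for example wrapping `psutil.sensors_temperatures()` on Linux, or parsing a Ryzen Master/HWiNFO CSV export on Windows), since there is no universal temperature API.

```python
import time

def log_temps(read_temp, samples=10, interval=1.0):
    """Collect (timestamp, degrees C) pairs from a caller-supplied reader.

    read_temp: any zero-argument function returning the current CPU
    temperature, e.g. one wrapping psutil.sensors_temperatures()["k10temp"]
    on Linux (the AMD CPU sensor driver name there).
    """
    readings = []
    for _ in range(samples):
        readings.append((time.time(), read_temp()))
        time.sleep(interval)
    return readings

def summarize(readings):
    """Min/max/average of the logged temperatures."""
    temps = [t for _, t in readings]
    return {"min": min(temps), "max": max(temps),
            "avg": sum(temps) / len(temps)}
```

Run a Cinebench pass while this logs in the background, then look at `summarize()` to see whether 95°C was a momentary spike or a sustained plateau.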


I bought this CPU only about two weeks ago.


I have tried reapplying thermal paste: I took off the cooler, cleaned both the cooler and the CPU's IHS with 99% isopropyl alcohol as recommended, applied a grain-of-rice amount of thermal paste, and reattached the cooler evenly by tightening the screws in equal increments of rotation.


I was playing Valorant this morning and peeked at Ryzen Master to see how it was doing, and saw my CPU at temperatures like 88°C, which I believe is definitely a bad thing.


I feel like I've tried everything, and reapplying thermal paste doesn't seem to be working. What do I do?



CPU - R5 5600X

Motherboard - MSI B450 TOMAHAWK MAX

CPU Cooler: Wraith Stealth Cooler (STOCK COOLER)

Case: Fractal Design Meshify C Black ATX Mid Tower

(If you need any other specs, please let me know and I will add them right away.)

75 Replies

I've noticed the same problem with a similar CPU (5700G). My motherboard also happens to be a Gigabyte.
Because the top temperature reported by software tools with the stock cooler from the box was too high (up to 85-90°C), I had to install an old copper IceHammer cooler that I had on hand. With that old IceHammer cooler the top temperature is about 10°C lower than with the stock one, which is still strange, because back when this cooler was used with old 95W CPUs their top temperatures barely reached 60-65°C.

So, I decided to run a stress test and measure the temperature right on the CPU's lid (using a thermocouple with a corresponding graduation table and a voltmeter, which I believe gives a precision of at least ±1°C, enough in this case). After 10 minutes of the stress test, the temperature on the lid was only 50-52°C, while all the software tools reported a CPU temperature of 75-78°C.
Besides, I've noticed the integrated GPU has a separate thermal sensor, and that sensor, via the software tools, reports almost the same temperature as the one measured on the CPU lid.
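The graduation-table approach described above amounts to linear interpolation between calibration points. A minimal sketch of that conversion (the table values here are rough type-K-style figures for illustration only, not real calibration data):

```python
# (EMF in mV, temperature in degrees C) calibration pairs, ascending by EMF.
# Illustrative type-K-ish values; a real graduation table goes here.
TABLE = [(0.0, 0.0), (1.0, 24.9), (2.0, 48.9), (3.0, 73.1), (4.0, 97.4)]

def emf_to_temp(mv):
    """Linearly interpolate temperature from a measured thermocouple EMF."""
    if mv <= TABLE[0][0]:
        return TABLE[0][1]
    for (v0, t0), (v1, t1) in zip(TABLE, TABLE[1:]):
        if mv <= v1:
            # Interpolate within the [v0, v1] segment.
            return t0 + (t1 - t0) * (mv - v0) / (v1 - v0)
    return TABLE[-1][1]  # clamp above the table
```

With this illustrative table, a voltmeter reading of about 2.1 mV would come out just above 51°C, i.e. in the same range as the 50-52°C lid measurement.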

Another method I tried was to check the temperature right after exiting S3 sleep mode. After half an hour in S3, the CPU temperature reported by the software tools was 40-42°C immediately after waking, while the air temperature in the room was only 25°C. That is physically impossible (considering the copper cooler weighs over 0.5 kg) unless something is wrong...

From this I conclude: either all the software tools report the wrong temperature, or the thermal resistance of the interface between the CPU die and the lid is too high.

I suspect it is most likely a thermal interface issue, because when I put a high load on the integrated GPU with no CPU load, both the CPU and GPU temperatures reported by the software tools come out almost the same (50-52°C).
This also means there is no issue with the thermal interface between the GPU and the lid.
I don't know whether this GPU is a standalone die or shares a single die with the CPU. If it is one single die, then this supposition is incorrect.

So far I have no clear understanding of why this issue happens. Maybe somebody else has ideas and can share results of other tests/experiments that would help figure out what the issue is...

I started to think that this exact CPU sample that recently came to me was defective, but then I found this forum thread and plenty of related information on other sites, which makes me think it is a common issue...

"or there's too high thermal resistance of the thermal interface between CPU die and the lid."

I fully agree, the cores cannot get the heat out. I am not able to run 4.7Ghz because that needs 1.200 Volts, which runs the cpu too hot under stress tests. In prime95 it shoots so quick up, the thermal-limit of systemboard misses it, temperature continuous up to 100 when the cpu shuts itself down.  A 5600x only uses about 115 watts under these conditions, thatway more than the TPD; That is why I installed the 360MM AIO wtercooling which is capable of about 300+ watts to dissipate. 

I fully agree; the cores cannot get the heat out. I am not able to run 4.7 GHz because that needs 1.200 V, which runs the CPU too hot under stress tests. In Prime95 it shoots up so quickly that the motherboard's thermal limit misses it, and the temperature continues up to 100°C until the CPU shuts itself down. A 5600X only uses about 115 W under these conditions, which is way more than the TDP; that is why I installed a 360 mm AIO water cooler, which is capable of dissipating 300+ W.

Exactly as you state, the CPU cores can't get the heat into the heat spreader. So I am left at 4.6 GHz with undervolting; the CPU idles at 32°C and peaks at 58°C. Prime95 instantly crashes the system. Currently I am rather unhappy with this Ryzen generation:

- A-brand DDR4 memory, certified for the board, cannot be run at spec (3200 MHz)

- Cooling problems

- 1st 5600X CPU was totally unstable at any settings; 2nd CPU is still unstable but better than the first one.
      I tried a different A-brand motherboard, also not fully stable (4.6 GHz), and unstable at BIOS defaults. Full BIOS defaults are somewhat stable at a great loss of performance and an extremely hot CPU under load, since the defaults push up to 1.4+ V in an attempt to get it stable, which doesn't really work. An unstable CPU is an unstable CPU no matter what you try.

So, nothing else to do. As a home user, I do not have the resources to keep buying different combinations and returning stuff.




Not applicable

why the heck is 65 watts CPU using 100 something watts of power what kind of idiot are you? TDP is how they build the heat sinks and cooling fans and cases and stuff. you almost never ever reach TDP.. the TDP is 65 watts; it's the uppermost thermal limit that can ever be measured for it to reach, based on the power a 5600x uses. so they design the cooling system for a 5600x on the basis that it won't need to go higher than a heat output calculated theoretically to be 65 watts, based on the max power you can physically juice into the thing. whether PBO or overvolting or whatever, you're just basically setting it to cap out at the TDP, because it's based around what the chip can possibly do at max, not what it normally does.

Look up space heaters and heating elements and how hot they get vs how much watts you put into them. then look at your 5nm microns hair thin sliver of silicone oven baking tray and metal designed with the tiniest bit of resistance covered in billions of transistors. the things not designed to be a heater or cook stuff. its designed to be power efficient and not to explode in flames if it ever gets hot it simply lowers the power intake. its got POWER STEPPING. or power auto regulation.. its like a thermostat and it can also ramp the fans up to spin faster if it thinks its warm? you do have fans dont you?

Say you've got a car and your dash has a dial that goes to 230km/hr.. and the engine temp has a dial too that goes red at the very top the redline. over 230km and at the very end of the redline is where the TDP is calculated from for the CPU? understand? 65watt TDP. when driving for your groceries or in the mountains or national parks or the race track when would you reach 230km/hr and with your engine redlined completely to its max limits for any long duration of time?

in the mainboard bios leave the damned socket type voltage power thingy setting that lets you select between 30 or 45 or 65  the hell alone if you dont know what the words wattage means or voltage or any of that stuff?

also NEVER SET IT TO ECO MODE unless you're doing homework assignments on a laptop. if you want to overclock, don't; instead enable PBO. PBO isn't technically an overclock, it just maintains a boost clock more often rather than the base clock. it can be set to raise the boost clock a bit higher too, depending on what's optimal for your chip and temps and stuff. you use air cooling for your 65 watt CPU. the most powerful 5950x or whatever uses maybe 208 watts of power, was it? and that's probably fine to aircool a 5950x. it's a different chip layout and core design with MORE CACHE, which runs hotter and uses more power in the 5950x vs the 5600x.

your CPU should be probably about 50 to 60 degrees tops with even cheap offbrand aircooling. 

water cooling is dumb and stupidly expensive and requires absurdly more maintenance than just a brush off or compressed can of air or something maybe once a year and barely changes temps by like 5 degrees if your ideal fantasy of hundreds more for possibly 10 degrees difference means the world to you when the hardware is silicon and metal like what your OVEN IS LITERALLY BUILT FROM and your silicone baking trays.. it has temp warnings at 80 or 90 degress usually or closer to 100 degrees. and you stupidly think that the AMD phone adreno rx630 or rx780 GPU's which ray trace 3 kinda of rays and in like trillions or octillions of resolution that are more powerful than ps4 pro or xbox one X (those are rx480 at best) and the newer RDNA 2 ones perform same as a ps5 because rendering twice the resolution of genetics DNA is what RDNA 2 means can run cool enough for you to hold in your hand with a thin half straw width heatpipe of copper.. air cooled? you think its supposed to run hot?

you see the ADVANCED MICRO DEVICES shrinks the micron die size smaller and is MORE EFFICIENT. it isnt some trash 80's ancient garbage of 50 year old no copyright tech being renamed and third world educational free open source stuff and university kids assignments getting stickers like intel and nvidia on them. its a company that makes advanced micro devices and is far more efficient. look at the AMD phones snapdragon 888 was 40% more efficient and less battery use than snapdragon 865 right? all it did was shrink it a bit for the most part. 5nm instead of 7 or 8 was it?



@Anonymous wrote:

why the heck is 65 watts CPU using 100 something watts of power what kind of idiot are you?

Ok, breathe deep.  First of all, "TDP" is the theoretical thermal output of a CPU (the T is literally short for Thermal), not the power usage number.  Heat is a product of inefficiencies and losses rather than the actual function (the function of a CPU is to perform logic operations, not heating; the heat comes from inefficiencies turning some of the incoming power into heat.)  Thus a CPU can pull more than 100 watts of power and produce less than 100 watts of heat.  And yes, you can actually measure the real usage.  Set your video card to full economy mode, then run something like Prime95 and measure the before and after values using a meter such as a "Kill A Watt" or a current meter (though be careful measuring like that with a multimeter, and be extra careful because you'll have to bypass the fused circuit.)  I did not measure precise values on mine (that was outside the scope of what I was doing), but I can tell you that Core Temp is at least in the right ballpark with its CPU power usage estimates.

TDP is how they build the heat sinks and cooling fans and cases and stuff.


Well, not exactly.  TDP is a guideline that basically says "you need a cooling solution that can dissipate this level of heat production," and even then it's a very loose guideline.  (For example, have you noticed TDPs tend to be listed only as a small set of specific values like 15, 65, etc.?  If it were even remotely exact you'd see numbers outside those specific values, like 67, 73, 82, etc.)

It was never terribly exact even before, but with the Ryzen 5000X series it's further off than ever.  My best guess is the TDP listing is based on the base clock of these CPUs, with the thinking that thermal throttling will push them down to that eventually anyway, so it's "close enough."  Whatever the reason for the inaccuracy, the fact remains that it does not fit real life scenarios for the intended purposes of this line of CPUs.  (After all, if we were just looking to browse the Web and watch Youtube videos, a Ryzen 3 would be the only sensible choice.  We got these things for performance in gaming/encoding/whatever, and that, by definition, means they will be pushed at times.)

Look up space heaters and heating elements and how hot they get vs how much watts you put into them. then look at your 5nm microns hair thin sliver of silicone oven baking tray and metal designed with the tiniest bit of resistance covered in billions of transistors. the things not designed to be a heater or cook stuff. its designed to be power efficient and not to explode in flames if it ever gets hot it simply lowers the power intake. its got POWER STEPPING. or power auto regulation.. its like a thermostat and it can also ramp the fans up to spin faster if it thinks its warm? you do have fans dont you?

7nm on this series.  And you're right in that these are no space heaters (who said they were??)  Which made your earlier statement all the stranger.  You claimed they weren't using anywhere near to that many watts of power, but then that would mean the actual loss to heat is much closer to 1:1 or even somehow higher than it should be (as if they were heaters!)  It's absolutely true that modern CPUs have significant power efficiency and energy loss mitigations in place and that's why it can hit such high power usages and produce so much less heat.  For example, if a 5600X uses, let's say 125W maximum under its stock settings, then 65W TDP would mean only 52% of the energy it's using ends up lost to heat (basically half.)  Maybe not great, but of course there are far worse things.
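The 52% figure is just the ratio of the TDP rating to the assumed draw (the 125 W maximum is the poster's assumption for illustration, not a measured value):

```python
def heat_fraction(tdp_w, power_w):
    """Share of electrical draw the TDP rating would account for,
    following the post's back-of-envelope reasoning."""
    return tdp_w / power_w

# 65 W TDP against an assumed 125 W maximum draw
print(round(heat_fraction(65, 125) * 100))  # -> 52
```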

And yes, thermal throttling and fan speed adjustments occur.  But I think you skipped over a lot of the revelations of this thread.  As people actually using these CPUs in real life conditions have, unfortunately, discovered, those aren't sufficient with the stock HSF.  The CPU hits that thermal limit (which already isn't exactly great for it, as it's only a little below the critical damage point) quite fast under many real life conditions and begins throttling and ramping the fan.  Thus you get a CPU whose official specifications and benchmarks lead one to expect a certain level of performance, yet which delivers significantly less, because after not even a very long period of time it has dropped significantly from where it started.  In my own testing I saw it drop to barely over the base clock under heavy usage on the stock HSF.  What's more, even putting aside the performance expectations, the fan on these things is LOUD.  Perhaps you're one of those people who hides your PC inside one of those enclosed desks and lets it run super hot, but for most of us, when that fan is ramping up not terribly far from our ears, it's quite unpleasant to listen to.


Say you've got a car and your dash has a dial that goes to 230km/hr.. and the engine temp has a dial too that goes red at the very top the redline. over 230km and at the very end of the redline is where the TDP is calculated from for the CPU? understand? 65watt TDP.


A flawed metaphor at best, but your comparisons are wrong.  The redline on the tachometer indicates the maximum RPM it could run before damage starts occurring to the engine, so in this metaphor it would actually most closely correspond to the maximum temperature (95C in these models) with a RPM limiter set to a slightly lower maximum (90C in these models.)  The speedometer going up to a specific number means absolutely nothing one way or the other (usually they go much higher than a car can actually handle -- even with aftermarket mods as the max is often enough down to components like the transmission/etc you can't do much with -- but sometimes they actually are lower than the theoretical maximum -- especially in the case of speed limiters) but if it corresponded to anything it would be the clock rate with the theoretical maximum speed therefore being the boost clock.  (Though, no matter how you look at it, the metaphor really falls apart there unless it's some kind of constant RPM transmission.)


when driving for your groceries or in the mountains or national parks or the race track when would you reach 230km/hr and with your engine redlined completely to its max limits for any long duration of time?

There the metaphor REALLY falls apart because that is exactly what these CPUs actually do.  They go straight to the maximum speed and hold it as long as they possibly can, only lowering it when heat forces them to.  For your metaphor, you really are getting the groceries with the engine redlined the entire time until heat forces you to let up -- and even then you only let up just enough to keep the heat from blowing it while the (much too small) radiator struggles desperately to catch up.  Definitely not something anyone remotely sane would do in a car, but CPUs work very very differently and the metaphor just falls apart here.


in the mainboard bios leave the damned socket type voltage power thingy setting that lets you select between 30 or 45 or 65  the hell alone if you dont know what the words wattage means or voltage or any of that stuff?

Skim the thread you replied to.  No, I'm not saying read it.  I get that you're not going to.  Skim it.  People are adjusting voltages for a reason.  And by the way, just a point of clarity:  LOWERING YOUR CPU VOLTAGE WILL NOT DAMAGE IT

Yeah, I know what your immediate response to that will be:  "why doesn't AMD and every other CPU manufacturer ship them with lower voltage settings then?"  Unfortunately, the short answer is simple:  because it's cheaper not to.  The long answer is CPU batches vary wildly within certain accepted ranges and they must ship with a default that works across the highest percentage that is reasonably possible.  (I'm not sure what rate they use, but a tolerance of 95% is generally considered the most commonly accepted in most mass production industries such that roughly 5% are discarded at the factory.)  The defaults set automatically have to Just Work(tm).  The process we're going through here is much more involved where we actually test different settings for stability and reliability in a long process that does admittedly require a bit of effort, but the end result is the CPU uses less power per clock and thus produces less heat.  (So it's actually a win-win when one can put in the time and effort.)


So I counteract your rather extremist claim of "absolutely under no circumstances touch the voltage options" with "anyone who reasonably can do so actually probably should."

your CPU should be probably about 50 to 60 degrees tops with even cheap offbrand aircooling. 

 You...  didn't even skim the thread the tiniest bit at all did you?  The stock HSF on the stock settings is NOT maintaining 50-60 tops on these lines of processors.  AMD themselves says it shouldn't even...  Also, it has become blatantly obvious that you do not have a Ryzen 5000X series CPU and are just guessing based on different chips because if you ever looked at temperatures at all you would clearly see that is not what happens with the stock boost settings (as you insist upon.)


water cooling is dumb and stupidly expensive and requires absurdly more maintenance than just a brush off or compressed can of air or something

Modern AIOs require zero maintenance -- merely having to be replaced after roughly two years.  I will agree that we shouldn't have to use water cooling on these (whether AIO or not,) and I'm using air cooling on mine, but I can absolutely understand why people might want to just not deal with it and go straight to an AIO.  (Plus I can imagine an owner of, say, a 5950X might actually need water cooling to keep up if they do really heavy stuff like encoding or something.)



BTW, one point of clarity on all this that I don't think anyone has been properly taking into consideration.  We're sort of trained to think of the boost on these processors as being the true stock speed, with the base clock actually being an underclock for efficiency.  This isn't true though.  The boost is basically an overclock that just happens to be officially tested and supported.  Remember the voltage curve it ends up having to use.  For example, as I said with my own CPU, at 4.3GHz it only requires 1.10625 V, but then at 4.4GHz it jumps up to 1.1625 V, and at 4.5+ I think it's close to 1.3V (I stopped testing at 4.5 because it already pushed the full load temperatures up dangerously high as far as I tested it, and I need my computer to be able to safely do things like encoding.)  That is a very significant curve...  You see voltage jumps like this as you get closer to the edge of what the silicon can actually tolerate (at least without liquid nitrogen cooling or something.)  In effect, AMD has overclocked these processors but not upgraded their cooling.
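To see how sharp that curve is, note that dynamic CPU power scales roughly with frequency times voltage squared. Plugging in the voltages reported above (a rough model: it ignores static leakage, and the 1.30 V point stands in for the poster's "close to 1.3V"):

```python
# Relative dynamic power ~ f * V^2, normalized to the 4.3 GHz point.
points = {4.3: 1.10625, 4.4: 1.1625, 4.5: 1.30}  # GHz -> volts
base_f, base_v = 4.3, 1.10625

for f, v in sorted(points.items()):
    rel = (f / base_f) * (v / base_v) ** 2
    print(f"{f} GHz @ {v} V -> {rel:.2f}x power")
```

By this model the last 200 MHz costs roughly 45% more power, which is the "very significant curve" the post describes.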


Found some info here with photos:

A photo of the delidded Ryzen 5 5600X:


Also, related thread here:

I am thinking of trying a cooler that would make thermal contact with the entire surface of the processor lid. Theoretically it should reduce the CPU die temperature compared with coolers that have relatively narrow thermal contact with the lid (such as the old IceHammer cooler I used, which takes heat effectively only from the central part of the lid and doesn't even touch the corners of the lid at all). I will post an answer here later if it resolves the issue.

Bear with me a sec.  I have a reason for this thread necromancy.  Tl;d[n]r though: maybe other 5600X users with high temperatures might try a fixed CPU speed; it may or may not help.


Anyway, I just bought a 5600X as an upgrade for my old CPU.  I specifically chose the "X" version because I wanted something that would last me a long time, and I'm hoping the overclocking leeway will help me keep it at least tolerable down the road.  I did not Google first (and even if I had, I wouldn't have thought to search for this exact thing -- this is not something the end user should have to deal with on a stock CPU without overclocking!)  I started to get extremely concerned because I was seeing temperatures in the lower 80s under really heavy usage (mostly Prime95, but occasionally for a few moments in gaming), and in Prime95 it could even go up to 90 and stay there if I didn't stop it.  I suppose I have an even hotter 5600X chip, considering it was holding at the thermal throttling limit.  I followed advice from this thread and it helped a bit, but it would still go over 80 in Prime95.  I was even able to set the curve optimizer all the way to -30, which I think was maybe stable, though I ran into a problem (I forget what happened, and it may not even have been the -30 at fault) and just set it to -25 rather than test more thoroughly, because, frankly, I was afraid to do any real testing since temperatures stayed so high.  (That said, we need a modified version of a CPU stress tester with verification, like Prime95 has, that can force CPU frequency limits so we can test different frequencies, since the point where low voltage turns unstable could potentially vary quite a bit.)


I finally gave up and replaced my HSF.  I didn't realize how awful the stock cooler really was!  My previous CPU kept cool enough with it that I never felt a need to replace it, and I didn't realize how little actual metal was physically in there.  When AMD first started giving us these Wraith coolers they touted them as being super good at cooling while remaining quiet, but now I've come to realize it was really just the CPU I had at the time being that efficient.  There's less actual aluminum in that thing than in stock coolers for earlier generation CPUs (I'm pretty sure an old Athlon Palomino I had came with a larger stock HSF, and that was back when the TDPs we see today weren't even thinkable); I guess it just relied on a really big fan to handle full load.  Anyway, now that I can actually tweak and not be terrified to test my CPU, I'm definitely getting better results.  It's able to hold the full 4.6GHz speed on all cores far longer, whereas with the stock cooler it would drop to around 3.8, then as low as 3.65 or so, pretty quickly in Prime95.  Temperatures are still a lot higher than where I really want to see them (I want this thing to last me something like 5 years, and I know it won't be exactly great by then even with overclocking, but I need to be sure it won't be dead), but the absolute max I saw was about 81C, and only for a short period of time.  I guess in the end the Wraith cooler simply is not acceptable for the 5600X.  Honestly though, Core Temp is showing quite a large amount of power usage, and I think the 5600X may actually be incorrectly classified as a 65W TDP part when it is, in fact, really a lot more.  (Especially at 4.6GHz, which I think is something like 120W of power usage...)


But I also did something else that might be more relevant here and this is why I'm bumping this really old thread because maybe it will help someone (if not any previous posters, perhaps future searchers.)  One thing I came up with on my old CPU was setting a fixed all core speed.  That made more sense -- especially on Zen 1 -- with the old CPU and its single core only turbo boost, but I felt like, overall, games performed more stably and generally better with an all core setting (again though, that was with the other cores effectively overclocked by a significant margin.)  Still, I thought I'd try that with this to at least get it generally more consistent.  Strangely enough the temperature shot way down!  I've done a few tests back and forth and ultimately settled on 4.3GHz as a fixed rate (which is still quite good anyway -- especially given that the stock HSF had it going down to around 3.8 or so for any sustained operations.)  Idle temperature after a while is 24C (down from upper 30s or so before) and the absolute maximum small FFT test still only got up to 79C (only down from 81C or so) despite its inability to downclock (the funny thing is it was going down to around 4.3 or so after a while of Prime95 anyway.)  I think the average Prime95 went down from something in the upper 60s down to the lower 60s (hard to give exact numbers because it still varies quite a lot across tests and even varies a bit during each test.)  I can't really explain why setting a fixed speed even seemed to lower the idle temperatures, but I did see a significant drop almost across the board with only the maximum being about the same (and it takes longer to reach that maximum then drops again fairly quickly, so I think normal usage will never actually hit that in even heavy gaming.)  
This is with the new HSF, not the stock one, but obviously less heat on this would be less heat on that (and I think maybe more extreme results since basically the stock HSF is just plain not keeping up with the 5600X.)


Anyway, long story short, if it's just really really bad for you, try setting a fixed speed.  I don't know if this may just be a YMMV sort of thing (certainly I'd be interested to hear if anyone else sees the same result I did with a pretty noticeable drop even at the bottom end) but I saw such a strangely noticeable drop right off the bat that I really think there must be a bit more to this.  Since the stock HSF has it drop down to something like 3.8 when under a consistent load for any lengthy period (eg playing a game) it's probably not going to end up performing worse over time at least (obviously you lose that initial burst jump to 4.6 down to 4.1 or so before it finally settles around 3.8-ish versus just straight to whatever, but it may be worth it.)  I believe this bypasses stuff like thermal limits and most (or all?) of the PBO stuff though, so watch your temperatures during testing if you do this (well, I think it's still supposed to basically just shut off if it hits the absolute max, but you don't ever want to go there anyway.)  Hopefully it works and helps someone.  I can't really explain it though because until I set a fixed speed the cores were all actually staying around the same speed as each other (none of that one core only boosting from Zen 1 here) and ultimately I settled on about the same speed as where they were going down to after a while anyway, but I swear even the maximum temperature is ever so slightly lower.  I haven't even tried tweaking voltages down a bit (I guess by setting a fixed speed I'm bypassing the PBO stuff, including the curve optimizer, so I need to try this later when I have time for the battery of tests it will need.)


EDIT:  Oops!  I had made my guess about where it would be for gaming based on the Prime95 lows, but it seems games are currently using this CPU so little that this estimate was way off!  It might someday hit 61 in gaming, but it seems today it's 39-41C depending on how heavily the game uses my CPU...  That is...  very acceptable to say the least!  I guess this CPU's heat production varies very very strongly and at that range it actually probably is around 65W TDP.

Adept I

I've had to limit PPT to 93W for load temps to remain under 80c. Otherwise with the default PBO settings it was going above 95c on Prime95 with small FFTs. That was scary.

I've since re-applied the paste and manually set PBO limits to:

 - PPT=93, TDC=61, EDC=90
 - CO on per core basis at -13 to -30 (Started with -30 each core and ran Prime95 small FFTs to find errors on each core. So far tested stable on small FFTs for 6hrs at 80c, and OCCT core switching for 1hr)
 - No boost override
 - Scalar: 1x
 - Thermal throttle limit: 90c

Mobo: Asus B550-F (Bios 2404)
Cooler: Scythe Mugen 5 Rev.B with as5 paste
Case temps: 23-26c
CPU idle temps: 34-40c
cb r20 mt score = 4430

When I tried setting PBO to motherboard limits it was hitting PPT=103, TDC=62, EDC=118 on cinebench and prime95 - according to HWinfo.
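A quick way to see which cap is the binding one is to compare HWiNFO-style telemetry against the configured limits. A sketch (the field names here are made up for illustration; HWiNFO itself labels these readings PPT/TDC/EDC):

```python
def limits_hit(telemetry, limits, margin=0.98):
    """Return the names of limits whose telemetry sits at or above
    ~98% of the configured cap (i.e. the limits actually in play)."""
    return [name for name, cap in limits.items()
            if telemetry.get(name, 0) >= cap * margin]

manual = {"PPT_W": 93, "TDC_A": 61, "EDC_A": 90}
# Readings from the motherboard-limits run described above:
observed = {"PPT_W": 103, "TDC_A": 62, "EDC_A": 118}
print(limits_hit(observed, manual))  # all three caps are saturated here
```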

Like others have stated maybe the single ccd of the 5600x (smaller footprint for heat dissipation) is the limiting factor?

This cooler was previously doing a good job of keeping an overclocked LGA 1366 200+ watt chip under 80c.

Edit: Stock limits were hitting 65c load. (ppt 76, tdc 60, edc 90)
Edit2: Trying ppt 96, tdc 56, edc 80 yielded the same Cinebench scores (4430 on r20 mt) with marginally lower temps, even though it's hitting 95 watts in HWinfo. Will need time to check if it's stable with the current -ve CO curve.
Edit3: edc 83 now; 80 forced a restart between Cinebench runs. tdc 56 seems to limit Prime95 temps to 76c. Seems stable after 3-4 +ve curve points on two erroneous cores.


This is my first post here, and please bear with me as I am a beginner in the world of computers. I have just finished my second build. When I was using Windows 10 I noticed my temp was around 30°C to 40°C idle, but since I upgraded to Windows 11 my CPU temp is sitting at 50°C to 55°C idle, and I am not sure if this is a coincidence or something else.

For your info, I have a Ryzen 5 5600X, an X570S Pro AX (REV 1.1) motherboard from Aorus, an Aorus ATC800 CPU cooler, Lian Li fans (three front, two top, one rear), and an Aorus 3070 graphics card (GPU idle temp 42°C).

The thermal paste used is the one provided with the ATC800 cooler. Now my question: could it be Windows 11? If yes, what should I do to remedy the issue?

If it's not Windows 11, is there any way I can lower the temp to at least 40°C idle? If yes, and if it is in the BIOS, could you provide details on what I should change? Thanks.


Note : Bios ver F4A

First of all there should be a question: how do you measure the CPU temperature? In what part/point of the CPU is this temperature being measured, under exactly what conditions, what type of measurement provides such values (peak values, average values, etc.), what are the filtering parameters, and so on...

Of course, the only reliable way to measure the temperature of a CPU is with an external, verified measurement device. If no such device is available, you can only use empirical/indirect methods to figure out roughly what range the temperature falls in (say, with a "precision" of +/-5 degrees Celsius).

If you use a software tool that interprets/transforms data taken from the CPU/motherboard's built-in thermal sensors into values presented as Celsius/Fahrenheit degrees, then you need to verify whether the reported values are more or less correct.
Depending on the motherboard, these values may quite accurately represent the actual temperatures at the points where the sensors are physically located, or they may have nothing to do with the actual temperatures at those points at all.

E.g., after a set of tests with several Gigabyte motherboards based on 500-series chipsets, I've come to the conclusion that they all have serious issues in this regard, and further investigation showed there's no simple solution to the problem.
In simple terms: when you have a "cold" CPU (say, 30C), the software will report 50 degrees or more; and when you have a "slightly warm" CPU under load (say, 50C), the software will report 80 degrees or more.
It would take a lot of text to explain why this occurs, and it's practically useless information for a regular user anyway.
So I'd just refer to the official AMD statement that reported temperatures of up to about 90C are normal for Ryzen processors and nothing to worry about.

To decrease temps, some BIOS settings that are helpful to change are as follows (every motherboard has different settings, though):

Curve Optimizer (per core): start with -21 and work down toward -28 (all cores)

PBO Limits: Manual

PPT: 80W

TDC: 60A

EDC: 90A

Thermal Throttle Limit: 80C

For the Ryzen 5 5600X (as I have one and currently use one too).
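For a rough feel of how the limits above interact, here's a small sketch. This is illustrative arithmetic only: the 1.2 V load voltage is an assumption I've picked for the example, and real behavior depends on SoC power, workload, and firmware.

```python
# Rough sanity check of the suggested PBO limits (illustrative only;
# the 1.2 V load voltage is an assumed typical value, not measured).
def core_power_estimate(current_amps: float, vcore: float) -> float:
    """Approximate core power as current times voltage (P = I * V)."""
    return current_amps * vcore

# Proposed limits from the post: PPT 80 W, TDC 60 A, EDC 90 A
ppt_w, tdc_a, edc_a = 80.0, 60.0, 90.0
vcore = 1.2  # assumed load voltage for illustration

sustained = core_power_estimate(tdc_a, vcore)  # 72 W of core power
burst     = core_power_estimate(edc_a, vcore)  # 108 W for short bursts

# Under a long all-core load the TDC-derived ~72 W binds before the
# 80 W PPT, so sustained temps drop even though short boosts can
# still reach the EDC-limited level.
print(sustained, burst)
```

The point of the sketch is just that TDC is what caps a long, thermally limited load, which is why lowering it tames Prime95-style temps more than lowering PPT alone.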



Thank you all for your responses. The culprit was Nvidia's nvrla.exe causing a spike in my CPU usage; for now the temp is sitting between 30C-35C at idle. Each time I started the PC, nvrla.exe spiked it. This issue was supposed to be addressed by an Nvidia GeForce Experience update, but it wasn't.

Anyway, thank you all, and I'll probably follow seabass's tutorial for the BIOS.


One question: whereabouts can I find the core setting in the BIOS? Thanks.

Regarding the all-core Curve Optimizer: in my BIOS it is under the Advanced tab, then AMD Overclocking, then PBO Advanced, then Curve Optimizer.

I guess there's a limit to how long after a post one can edit, so sorry for the new reply. I just want to say that I've been experimenting. For some reason it's still consistently showing lower temperatures with an all-core manual clock setting, despite the fact that this means most cores are clocked higher most of the time during normal loads (and especially during heavy loads).

Since PBO is no longer truly doing its thing, I'm looking into manually setting voltages. I had an "offset" option in my BIOS that I thought might help, but I got near-instantaneous errors in Prime95 on one of the threads (the fifth core, I guess) with even the smallest negative offset. When I run Prime95 with normal auto voltage, it starts at something like 1.275V under light load and goes down to roughly 1.1875V under the heaviest load (I didn't write down exact numbers). I've already gotten away with setting it to 1.2500V in the BIOS (which seems to actually drop to 1.1062V under heavy load), though more testing is still needed (at least it's stable for a good while), and the absolute maximum temperature has dropped from 80C to roughly 69C (it can hit 70 but never stays there).

I'm not sure why the voltage is higher under low load than under high load, except I guess that even good PSUs droop a bit when current goes way up (mine should be overkill for my current system, and I'm not even pushing the GPU during CPU testing; the CPU shows just under 100 watts at full load). I'll have to keep testing to find the minimum setting I can get away with under mixed load conditions, but it looks like it can definitely go lower than what it was using, and even lowering it a tiny bit has a HUGE effect on temperatures.


So I guess if you really want to get that maximum temperature down -- especially with the awful stock HSF -- you might try both a fixed frequency and a manually set lower voltage and see what happens. I am definitely curious to know whether my results here are atypical.


EDIT: Much testing remains (a bit over four hours of Prime95 at current settings), but I have gotten it down to a 1.1000V setting (which drops to 1.0875 and rarely to 1.0812 at maximum load), and the absolute maximum is 70C, which it can't hold (it usually stays around 66C, only rarely climbing to 69 in some Prime95 tests and crossing into 70 for a few moments at a time).

Idle temperature is a little higher (I haven't let it settle long yet, but it looks like 27C may be the current Windows idle, versus 24 before) because the manual setting means it no longer automatically drops the voltage under low load. Interestingly, this is actually a good thing: the temperature ups and downs are a bit slower (the initial jump under full load is still a very hard, fast change, which is not ideal, but at least it's a bit slower and a lot less high than stock). Thermal cycling is actually a bigger killer of silicon dies than temperature alone, as long as you stay far enough below the maximum tolerance, so slower temperature swings under normal loads are genuinely good.


I thought of going to a higher frequency with lower-than-stock voltages, but the voltage curve starts to get pretty steep in this range. 4.4GHz requires enough extra voltage in initial testing (not even necessarily stable yet) that temperatures were still too high for comfort. 4.5GHz and 4.6GHz require a huge voltage boost and may be barely (if at all) below stock for those frequencies, with significantly higher temperatures (I saw power usage hit 125W in initial testing before I shut it down as not being what I wanted). It may vary by chip, but I think 4.3GHz is the magic number for the best balance of speed, voltage, and temperature for those who want a chip that really lasts.
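The 4.3GHz sweet spot can be sanity-checked with the classic dynamic-power rule P ~ f * V^2. This is a simplification (constant capacitance, leakage ignored), but the voltages and frequencies are the ones reported in this thread:

```python
# Back-of-the-envelope check using the dynamic power rule P ~ f * V^2
# (a simplification: capacitance assumed constant, leakage ignored).
def relative_power(f_new: float, v_new: float,
                   f_old: float, v_old: float) -> float:
    """Ratio of dynamic power at the new operating point vs. the old."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# 4.6 GHz @ 1.232 V (auto) vs. the fixed 4.3 GHz @ 1.10625 V described above
ratio = relative_power(4.3, 1.10625, 4.6, 1.232)
print(f"{ratio:.2f}")  # ~0.75: roughly a quarter less dynamic power
```

Losing about 7% of peak frequency for roughly 25% less heat is consistent with the large temperature drops reported in these posts, since most of the win comes from the squared voltage term.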



I was thinking about it, and I think I've figured out what is going on with this TDP stuff. I'm convinced that the 5600X is NOT really a 65W TDP part and is, in fact, considerably higher. I think the way this works is that they're saying its temperature tolerances are high enough that a HSF intended for 65W TDP can still keep it in the higher ranges they now consider acceptable -- combined with thermal throttling (which has a negative performance impact, of course) -- under the assumption the user will never actually use it close to 100% (so other cores can sleep from time to time).

Of course, that is actually really tricky, because different chips have different tolerances due to manufacturing variance, different cases have different airflow, ambient temperatures might be higher than expected (or humidity, since that has an effect too), etc., and users have different use scenarios for their computers. So selling it as 65W TDP, if this is actually true, seems kind of bad. If I'm right, I'm definitely curious what the true TDP of this thing actually works out to. The HSF I'm using now is huge and originally had two 120mm fans (I swapped in one high static pressure Noctua in the middle with roughly the same result), and I could still see it go almost to 90C with stock settings under maximum load before I started tweaking things. It produces insane amounts of heat under real loads, and I think they just didn't count on anyone actually doing things that push it hard (like encoding or compiling).
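Just to put numbers on the TDP point, using figures already reported in this thread, and treating TDP as a cooler-sizing figure rather than a cap on electrical draw:

```python
# TDP vs. actual package power, using figures reported in this thread.
# TDP sizes the cooler; it does not cap what the package can draw.
tdp_w       = 65.0   # advertised TDP of the 5600X
stock_ppt_w = 76.0   # stock package power limit reported earlier (HWiNFO)
pbo_peak_w  = 125.0  # peak observed in this thread with PBO/motherboard limits

print(stock_ppt_w / tdp_w)  # ~1.17: even stock allows more than "65 W"
print(pbo_peak_w / tdp_w)   # ~1.92: PBO nearly doubles the advertised figure
```

So a cooler sized strictly for 65W is already undersized at stock limits, and far undersized once PBO or motherboard limits are in play.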

I've now reverted my manual PBO limits (previously 95/60/85 PPT/TDC/EDC) in favor of the motherboard limits (B550-F) after the latest BIOS update with the newer AGESA. Temps are fine, with Cinebench MT hovering around 70C and clocks averaging 4.35GHz.
Scalar: auto/1x
Boost: +100MHz
Pre-defined negative CO curve enabled: -25, -13, -27, -27, 0, -10

This gives me better gaming performance (higher minimum framerates) at the cost of a few points on Cinebench MT scores. I guess power delivery is more efficient for the CPU this way, so it doesn't run as hot. Games don't push the CPU as hard, so it can run faster and cooler.


Journeyman III

With PBO set to motherboard limits, it will definitely hit your temp cap right away; that's the point of PBO: to uncap power limits.


Most of this thread has been using basically the stock PBO, with the main overrides being to lower things rather than raise them. This is not an overclocking (or perhaps we should say overspeccing, with modern CPUs) thread. We've been discussing lower limits on maximum temperature, current, etc., and even lower voltage curves -- not raising them -- and we certainly aren't discussing motherboard manufacturers' performance-oriented overrides. Remember that PBO itself is basically a stock feature of these CPUs, with motherboards merely offering various overrides (generally intended to increase performance, but in this case we're focused on decreasing temperature, potentially at some cost to performance -- though likely ending up a bit better off thanks to less thermally-prompted downclocking). The whole CPU boost thing has been the norm on AMD processors for the past several generations (even before it was standardized under the PBO name), and it's basically a stock, advertised feature.


Actually, I do want to point out that the PBO curves in use are shockingly short-sighted scaling policies, to the point of very likely being less effective than a more intelligent approach would be. What I observed with everything stock and no overrides was that it hit 4.6GHz for only a very short time, then dropped to 4.2GHz almost immediately in real-life usage due to the sheer amount of heat produced, and then slowly sank further, settling at the minimum 3.7GHz relatively quickly under sustained heavy load. In other words, the stock PBO curve (and, by extension, anything that raises rather than lowers it) actually decreases overall performance by making the chip scale down quickly and stay down unless the load drops very fast. (That is, if you're gaming or doing anything heavier. Things like encoding are definitely out and will drive you crazy with fan noise!)

As I've been seeing in my own testing of manual overrides, it pushes the CPU too high for too long, resulting in a heat buildup it then can't shed. (Though I still don't have an explanation for why the lows remain as high as they do with PBO versus manual overrides -- remember, when I initially set a fixed 4.3GHz instead of the 3.7 minimum this CPU drops to, I still had voltage on auto, so I wonder if PBO was setting the voltage higher than it should even at lower frequencies, even after I later set a negative curve.) I really feel like the PBO of this line of processors -- whether stock or a performance override -- is heavily tuned for unrealistic conditions. In fact, I feel the PBO curve just isn't optimized at all for the actual target market of these CPUs.

This sort of curve would work incredibly well on a Ryzen 3 aimed at something like 60 watts maximum power usage, but my 5600X can hit 115+ -- rarely even as high as 125W -- when pushed. A curve that pushes the absolute maximum less (e.g., hitting the top boost only in very short bursts rather than trying to hold it as long as possible) and stays at a frequency that uses a much lower voltage for longer would produce better conditions. "Race to idle" only makes sense if completing the current workload actually ends the workload and idle is established, but we buy these CPUs for gaming and such, and race-to-idle doesn't make sense when idle only arrives at whatever unpredictable point the player stops playing (which is pretty much always long enough to push the CPU). Ideally there would be different curves for different workloads (for example, gaming might manage, say, 4.3GHz on this CPU for an extended period with the stock HSF, but encoding would probably have to max out at more like 4.1). Obviously it isn't realistic for the CPU itself to know that, and I would only propose shipping gaming-oriented processors with a PBO curve aimed more at gaming. (Although, that said, this does leave leeway for third-party software to impose different curves based on what it can observe of system usage.) I would definitely say the processors discussed in this thread are aimed at gaming more than anything.


I've seen a lot of "AMD considers it perfectly normal for these CPUs to run this hot," but one question I'd like to see answered is: does AMD say this under the assumption that no one will still be using these CPUs in two years? There is a strong trend toward upgrading components every two years or so, and very few people actually try to return a failed CPU under warranty if it doesn't fail within less than a year (generally people only bother with returns to the store, which in most cases is a 30-day window, though for CPUs it's only 7 days in some places -- I presume due to too many overclockers pushing too far and then seeking a refund). Not all CPUs will be pushed to their max (and again, there are people who buy a 5600X or better just for web browsing, lol), and many will last out that period, so if they don't start mass-failing within a short time it probably won't be considered a failure. This may not be what they are thinking, but I would very much like to see that question addressed officially at some point.

I can certainly tell you my goal is to make the CPU last longer than even the warranty period (this is why I got one that is supposed to have some overclocking leeway -- so later, when it starts to really fall behind, I can try to squeeze a bit more out of it). And after the recent chip availability crisis (which seems to be improving a bit, but may well happen again -- perhaps multiple times at this rate), the short-term mindset of "I'll just toss this CPU in a couple of years anyway" might actually be a bad idea. I don't think we'll see something like that whole mess with the GPUs, but there may still be plenty of situations where rushing to upgrade every two years just isn't workable the way people have been counting on.

I've seen other people ask this question to various degrees, but I've yet to see a real answer. Note that the "three year warranty" on the box doesn't actually guarantee the CPU will last 3+ years -- only that if it doesn't, they'll replace it if you jump through all the hoops (and even then there are plenty of conditions under which they'll say "sorry, your fault" and not replace it). I'm not even targeting only AMD with this question, as Intel has seemed to do the same at times (remember the late-generation Pentium 4s, which had to be pushed super hard to stay competitive with AMD's generally better performance in typical home usage? Those CPUs easily ran close to their maximum temperatures all the time too, since that long pipeline was just not an optimal design for home desktops. At least they didn't jump up and down so FAST...). Some modern Intel processors may do the same thing, though at the moment not as many. The 5600X is more of an upper-midrange part than a "give me the absolute maximum possible" CPU and shouldn't be pushing itself to its limits so easily.



BTW, my final test result on my 5600X: 4.3GHz is overnight-stable at a 1.10625V setting on my system. This drops as low as 1.0875V for a split second from time to time and rises as high as 1.1312V for a split second, so that seems to be the full stable range for my chip. The weakest link is core 5, which is always the first to go unstable when I change any settings, but at this point it seems happy. If not for that one core, this chip looks like it would have binned higher, as the others all seem to have much more tolerance. The auto voltage setting uses 1.232V, so this is a substantial decrease from stock. The absolute maximum temperature under Prime95 was 71C (with a third-party cooler, mind you, so this would obviously be quite a bit higher on the stock cooler, where less than 4.3GHz may be necessary). Normal heavy usage in gaming and such is 40C. (OK, I overdid it on the cooler, but for the record I picked it based on what the CPU was doing stock -- hitting 120, sometimes 125 watts under the heaviest conditions -- versus now, where the absolute worst is 95W and the gaming norm is more like ~60W.) YMMV, but 4.3GHz @ 1.10625V may at least be a good starting point for people to experiment with on a 5600X to find the best balance. This is definitely better performance than jumping to 4.6 and quickly dropping to 4.2 and eventually 3.7 under heavy usage on the stock cooler (with the upgraded cooler it eventually settled at something like 3.8GHz on stock settings, I think, still with a lot of noise).

I guess this is why so many people go to AIOs with this thing on stock settings, but now I can run pretty cool and quiet on air with probably better long-term performance than many would see at stock, and I'm pretty sure I could have done so on a much cheaper cooler. (But hey, I'm pretty happy with the long-term prospects of my CPU running at 40C in games and such, lol.)



This is a very interesting subject. I am a beginner and I have one simple question I would like answered, please.

First of all, I don't like how my CPU is running hot all the time. For example, when I play Call of Duty: Warzone it hits up to 75C, but mostly sits at 68-69C; in Fortnite it's around 60C, and in Rocket League 55C. My CPU is at default settings with no Curve Optimizer, but PBO is enabled by default in the BIOS, as is Core Performance Boost.

Now, I would like my CPU to last a bit longer, so what would be the easiest way to keep it cooler?

I am using the ATC800 cooler tower from Aorus and it seems decent so far.

I remember that when I disabled Core Performance Boost it didn't really affect performance much (it wasn't noticeable), but I was told by many people not to disable it! Yet when I did, the temps decreased a lot from before. PBO, on the other hand, didn't seem to make a big difference enabled or disabled; I'm not sure what difference it actually makes.


Is there any way to decrease the temperature? An easy tutorial or simple step-by-step guide would be appreciated. Thanks.


Well, the short answer is that modern CPUs work in a much more complicated way than they used to (technically in a good way, but it makes tweaking harder), so there isn't necessarily a super easy answer. This thread is actually one of your best resources on this. In particular, reducing the max power limits and the PBO curve can help while keeping most of the performance. The short version of what you already saw: disabling PBO only disables specific tweaks, whereas disabling core boost entirely gives significantly lower temperatures because, of course, it can no longer push the frequency or voltage up at all.


However, beyond the curve optimizations, you might also consider doing what I've done instead. I was able to set the Curve Optimizer all the way down to the lowest value of -30 on my board and it seemed stable (maybe not overnight-stable, since core 5 is picky), lowered the other limits a bit, and did get better results -- but it wasn't satisfactory for me, since it still climbed into the 80s even on this huge HSF. When I instead set a fixed frequency, the temperature dropped a huge amount almost instantly. In theory a fixed frequency shouldn't be that much better, but for whatever reason I saw a significant drop the moment I did it. This does mean it's a bit less efficient at idle (though it draws much less current when idle, so it still uses less power -- the power efficiency of modern CPUs apparently consists of more than just downclocking), and the idle temperature was slightly higher (something like 3C, no biggie), but the maximum temperature under heavy load was significantly lower for me.

What's more, a fixed frequency also lets you manually set a specific voltage. Setting a lower voltage will definitely help lower overall temperatures. However, lowering voltage requires heavy testing to get it overnight-stable (the Prime95 Blend test may be the better choice for the overnight run, while the Small FFT test is a quick check that will show you fast if it's really bad during initial testing). Note that if you go too low, the system could potentially fail to boot far enough to even get into the BIOS to change the setting back, so either go down a very little at a time, testing all the while, or make sure you know how to reset your BIOS (and reconfigure anything that then needs reconfiguring).


If it helps as a starting point, what I found on my chip -- and yours will probably differ -- was that a 4.3GHz fixed frequency at 1.10625V was overnight-stable for me. You could start at something like 4.3GHz with, say, 1.1250V, make sure that is stable, then work down from there and see what happens. 1.1250V should be stable enough to at least get into the BIOS and change it back if it doesn't work. Probably. Keep going down until you find the voltage that won't pass an overnight test, then go back up at least one step from there (maybe two if you want to be really sure). Your chip absolutely will differ from mine, so you might be able to go higher than 4.3, or you may even have to go lower (watch your temperatures in Prime95 and in gaming at 4.3; from what I saw, the power and voltage curve from 4.4 upward turns steep on this chip, and 4.3 is probably the best balance). Given that, by turning off the boost entirely, you should be running at only 3.7GHz, you should actually see a positive difference in performance in a lot of ways (plus no frequency governor means no scheduling/scaling responsiveness issues). Using a lower voltage in particular should help a lot (I was able to lower mine significantly from the stock auto-detected voltage).
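The trial-and-error loop described above can be written out as a plan. This is illustrative only: the 6.25mV step is an assumed typical AM4 BIOS granularity, and `stress_test` is a stand-in for your own overnight Prime95 run -- a script obviously can't change BIOS voltages for you.

```python
# Sketch of the step-down undervolting procedure described above.
# STEP_V is an assumed typical BIOS VID step; stress_test() stands in
# for a real overnight stability run at each voltage.
STEP_V = 0.00625
START_V, FLOOR_V = 1.1250, 1.0500

def plan(start=START_V, floor=FLOOR_V, step=STEP_V):
    """Candidate voltages to try, highest first."""
    v, out = start, []
    while v >= floor:
        out.append(round(v, 5))
        v -= step
    return out

def find_stable(candidates, stress_test):
    """Walk down until a test fails; return the last pass plus one
    step of safety margin, or None if nothing passed."""
    last_good = None
    for v in candidates:
        if stress_test(v):
            last_good = v
        else:
            break
    return None if last_good is None else round(last_good + STEP_V, 5)

# Example with a fake tester that "fails" below 1.10625 V:
result = find_stable(plan(), lambda v: v >= 1.10625)
print(result)  # 1.1125: one step of margin above the last passing voltage
```

The margin step at the end mirrors the advice above: after finding the first failing voltage, settle at least one step back up rather than at the edge of stability.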


Anyway, it's a bit of a process, and I can only wish you the best of luck. This is, unfortunately, how modern processors work.


Hi Nazo,

Thank you so much for this detailed information. I have picked up on this: "by turning off the boost entirely, you should be running at only 3.7GHz, you should actually see a positive difference in performance in a lot of ways (plus no frequency governor means no scheduling/scaling responsiveness issues.)"

So if I turn off the boost entirely, will it affect anything on my PC? Most of the time I use my PC for gaming or browsing. Will it perform as it should, or will the performance impact be noticeable?

I have already disabled it before and I didn't see much of a difference, though of course I noticed the temperature decrease. My point is I want the CPU to last a bit longer, because the way it is now, especially when playing games, I don't think this CPU will survive that long!

Please keep me updated , thanks.



Just to let you know, I have decided to disable the boost entirely and I am glad I did! I played Call of Duty: Warzone with a custom FPS cap of 100 without any issue or loss of performance! The temp is between 45C and 55C, lol. Honestly, I'm now starting to think this boost is there to make your CPU not last as long, so you go and buy another one after a year or less, looooooool.

Anyway, I am happy with the outcome and the performance. Thanks.

Journeyman III

I just upgraded to a 360mm AIO a couple of days ago and temps went down to the 50s.

But again, with my settings and the stock cooler, it was in the 60s for me.


Your cooler is not designed for the 5600X, especially since it is a simple aluminum bar that can dissipate heat only from budget processors. The 5000-series processors have too small a crystal, so even most water cooling sometimes can't cope with them: the heating is too fast (heat stroke) for the cooling system to handle, because of the small crystal under the heat spreader's cover. Buy a cooler that is able to dissipate 200W or more of heat. Never use box coolers to cool this kind of processor; a Ryzen 5 3600X can still be cooled with a cooler like yours.


Who are you responding to? What heatsink are you talking about? If you are responding to the first post even though this is page 6, then I guess you mean the stock HSF. You don't need a HSF targeting 200W TDP, though. The actual energy usage of the 5600X can't exceed something like 140 watts absolute max under pretty extreme (unrealistic) conditions. Actual gaming likely never exceeds something like 90 watts, possibly more like 80. If it could produce 200 watts of heat from ~90 watts of energy, AMD would stop making CPUs, undersell every power company on Earth, and become rich enough to buy the planet. The real problem is that the HSF packaged with these CPUs just can't handle them (and frankly I'm not entirely convinced it can even handle the 65W TDP it targets).


Also, I have no clue what you're saying about the heat being so micro-focused. What you describe would only apply if the CPU had no heat spreader at all (it does!) and HSFs weren't designed to contact the full area of the chip (they are!). It just isn't applicable to the 5600X and its applications. The whole specific purpose of these components is to spread the heat out away from the die so that its small size is no longer an issue. You're talking embedded-systems design in a thread about desktop computers with desktop CPUs.


BTW, what do you mean by a "crystal"? If you mean a crystal oscillator, modern CPUs don't really rely on one internally (I'm given to understand that PLL designs handle those functions), and the heat production of a crystal oscillator is in the nanowatt range anyway -- not even a consideration in electronic design. Do you mean the actual die? I've never heard anyone call a CPU die a "crystal" in English before, though I suppose it could be a literal translation from another language, since the die is cut from a silicon crystal.


Hi Geforsikus,

My cooler is the ATC800 from Aorus, and according to their specifications:

  • 6 x Ø6mm direct-contact copper heat pipes, efficiently dissipating 250W of CPU heat.

Probably I need to adjust the fans, because I always run them on the standard profile and have never tried running them on performance while gaming.

But this tower CPU cooler is definitely designed for multi-core CPUs, according to their website.

Journeyman III

I read a good article about the energy-saving component in Windows. Just hit your Windows Start key, start typing "energy" until you see Power & sleep at the top, select that, and change your power plan to either Balanced or Power Saver. Mine was automagically set to AMD Ryzen Balanced and my temps were like yours out of the box. Setting it to Windows Balanced fixed all my higher-than-normal temps. HWiNFO now shows 35C while typing this, on a stock cooler.

Adept II

Normally this won't make a difference. It should default to a balanced power profile on a new installation; if you didn't change it yourself, perhaps a friend or family member did, but it should not have been set to High Performance by default. On its own, this really doesn't make much of a difference -- at that point Windows mostly just lets the processor do its own thing. Modern CPUs already have scaling built into the motherboard and even the CPU itself, so the Windows Balanced power plan is by now an outmoded feature that generally just says "let the CPU do its own thing."

Remember, too, the problem wasn't the CPU sitting completely idle -- it's under relatively heavy load. Try running a really heavy game and check your temperatures during play. And I just dare you to run Prime95 and watch the temperatures... (I don't know if you ever need to do things that load the CPU to 100%, like encoding, but I think you'll quickly see that with the stock HSF it will run right up to the limit in a shockingly short time, thermally throttle all the way down to completely unboosted, and possibly still push dangerously close to the limit.)


This does remind me of a possibility that could partially address the issue, without changing the heatsink, even for heavy gaming: it is actually possible to modify the Windows power profiles. I'm not sure to what extent this can override the CPU's built-in boosting, but if it can, a custom profile that makes the scaling less aggressive could go a long way toward helping people who don't want to change the HSF. There are tools to do this, if anyone with the patience would care to experiment.

Journeyman III

The Ryzen 5 5600X does run somewhat warm, but it shouldn't sit at 56C at rest. There are other factors to consider, such as your case (closed or open?), airflow, room temperature, etc. My recommendation, and that of many others, is to use an aftermarket cooler, which doesn't necessarily have to be liquid: there are many air-cooling options on the market, such as the famous Hyper 212 EVO, Thermaltake Contac Silent 12, and Vetroo V5, among others, which cost around $30 and offer very good performance. The temperature reaching 96C is excessive; it's not that it will damage the CPU, but it will limit its performance. On the Internet there are many videos about this processor and how to keep it at an acceptable temperature. I hope the information helps you. Greetings.


Well that was hard to read...

@patrictmcBride wrote:

the fact that the temperature reaches 96 ° C is something exaggerated that it is not that it will damage the CPU

Well, this isn't quite true, for several reasons. For starters, 95C is the official TjMax for the 5600X, so at 96C the silicon is being pushed past its rated limit (though it should shut off before this happens). I think you meant to say 95 rather than 96, but that's still off, because TjMax is already the point where damage can begin. 90C is the stock thermal throttling limit, and that's the number you really wanted: at 90C it starts underclocking to bring temperatures down and should generally keep itself from going above that, at the cost of performance. (Though, of course, you don't generally buy a CPU advertised with a 4.6GHz boost fully at peace with it sitting at 4.2GHz a few moments into any heavy load.)


But also, as we've already discussed earlier, it's not quite that simple.  Sure, 90C is officially OK in that the CPU won't immediately tear itself up.  But that doesn't mean it has no effect at all.  It's no absolute law, but there's a general rule of thumb that every 10C increase in a component's average temperature roughly halves its total lifetime.  It's loose, but it still applies to some extent here: running a CPU at a temperature that high for any significant length of time is not good for its long-term lifetime.  What's more, as I already tried to explain, fast temperature swings are also a significant wear factor on CPUs -- probably more so than maximum temperature alone (at least while staying below the point of damage.)  These chips are particularly bad at that -- so much so that I'm legitimately worried for them in general -- in that they can hit maximum temperature almost immediately under any load (compressing something, encoding, etc.) and then drop way down to a significantly lower temperature just as quickly the moment the load level drops.
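That rule of thumb is easy to put into numbers.  A quick sketch (the 10C-halving rule is a heuristic related to the Arrhenius model, not an exact law, and the 60C baseline here is an arbitrary choice for illustration):

```python
# Rough "10-degree rule": each 10 C rise in average operating
# temperature roughly halves a component's expected lifetime.
# Heuristic only -- real failure rates depend on many factors.

def relative_lifetime(temp_c: float, baseline_c: float = 60.0) -> float:
    """Expected lifetime relative to running at baseline_c."""
    return 2.0 ** ((baseline_c - temp_c) / 10.0)

for t in (60, 70, 80, 90):
    print(f"{t} C -> {relative_lifetime(t):.3f}x baseline lifetime")
```

By this rough measure, averaging 90C instead of 60C cuts expected lifetime to about an eighth, which is the point being made about sustained throttling-limit temperatures.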


The problem is, longevity just generally isn't a high concern for most people, and I believe AMD is simply adapting to that market.  AMD aimed for performance, probably at the expense of long-term durability, by setting it all up to run hot and fast.  (Though, ironically, this may actually hurt performance a bit too, since it maxes out and then throttles so quickly.  But it should pass most benchmarks at, or at least close to, maximum boost speed, and that's all most people see.)  Far too many people replace components roughly every two years with no concern for whether the components would last longer if they needed them to, and AMD designed this generation with that market in mind.  But not everyone can do that.  And the next jump means a new socket type (so a new motherboard is required) as well as new RAM (we're going to DDR5 now), so the next step up is going to cost much more than just a new processor even if you do have the funds to keep throwing out your old CPUs every couple of years.  Not to mention recent troubles with chips in general (and no new factories for complex chip production coming online for quite a long time yet -- the only one I know of being Intel's anyway.)  There is no guarantee processors won't get rarer and more expensive again in the near future.  So, again, I think it's a good idea to make your CPU last even if normally you might not bother.  That means if it's hitting the thermal throttling limit on any sort of regular basis, you really need to look into options to get that temperature down.


But, luckily, this thread addresses several possibilities of doing just that.


All modern CPUs run fast and hot for maximum performance, Intel's and AMD's alike.  They're not like the CPUs of years ago.

By default, TjMax is the 'throttling' limit.  So for the 5600X, it's 95C.  The 90C limit is for the other 5000 series Ryzens.  My 5800X will run right up to TjMax and stay there if the load is stressful enough.

If high temps bother you, there's a very simple way to lower the limit: a setting called Platform Thermal Limit in the BIOS.  By default it will be blank, 0, Auto, or whatever they want to call it, but if you key in a number, that becomes the new temperature limit.

Don't want your CPU to ever exceed 70C?  Put 70 for the thermal limit and run some tests.  You'll see the CPU will clock/undervolt itself down to respect it.



We have already addressed every single bit of this in this thread -- including better ways of handling things.

Adept II

Ok, so coming back to this, I have discovered something very interesting.  In the PBO settings there is a max core boost override option, which is probably present on most boards.  This is useless if your goal is to limit temperatures and extend the lifetime of your processor, because it only goes up -- at least it does on this MB -- and that of course means higher temperatures and decreased processor lifetime.  But while redoing all my BIOS settings after a reset, I discovered something: in the "AMD Overclocking" section (the one with a warning you have to accept to enter), the max boost option accepts negative values!

With this I can limit the maximum boost below the 4.6 it normally does, down to the 4.4 that I've found is the maximum before the voltage has to climb an exponential curve.  For some reason I have to set it to -250 to subtract 200 MHz from the max (so I guess it's not a MHz scale, whatever it is), but at -250 it stays exactly 200MHz below the stock max boost (e.g. 4.4 instead of 4.6 max.)  Because of this I can have the CPU back to scaling properly as it's supposed to, decreasing power usage and temperature a little when idling.  I guess now I have to more or less redo everything.  I don't actually want a lower PPT, etc., since I want the CPU to be able to completely max out now that it stays on the good side of that voltage curve, but I still want to decrease voltage from stock, so I have to retest with the PBO negative voltage settings instead of an all-core voltage override for the best results.

EDIT:  I see apparently the 5600X actually boosts to 4.65 rather than 4.60, hence why I need -250 to get to 4.40.
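So the arithmetic works out after all.  A quick sketch (the 4650 MHz stock boost figure is from this particular 5600X, and the override scale may differ by board or AGESA version):

```python
# Max-boost override arithmetic: the BIOS field does appear to be
# in MHz once you account for the 5600X's real ~4.65 GHz ceiling.
STOCK_MAX_BOOST_MHZ = 4650   # 5600X boosts to ~4.65 GHz, not 4.60
OVERRIDE_MHZ = -250          # value keyed into the "AMD Overclocking" menu

effective_max_mhz = STOCK_MAX_BOOST_MHZ + OVERRIDE_MHZ
print(f"Effective max boost: {effective_max_mhz} MHz")  # 4400 MHz = 4.4 GHz
```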


I would suggest that everyone who wants to get their 5000 series processor under control check that overclocking menu and see if it has such an option.  Keep your CPU on the good side of the voltage curve and it's simply more optimal all around.  (I would argue performance is even better despite the technically lower maximum frequency, because by building up heat more slowly it also doesn't throttle down as fast as it would at stock, meaning it stays higher for longer.)  You may need a different value than -250 depending on which processor you have, but my guess is something close to that will work for several models, since it ultimately comes down to where the voltage curve ramps up exponentially during boosting.  This looks like a great way to strike a much better balance and get more lifetime, and possibly more real-scenario (not benchmark, obviously) performance, out of it.


I've been testing, and so far I'm actually getting somewhat cooler temperatures, as should be expected.  Since I can do per-core voltage Curve Optimizer settings, I can set the better cores to even lower voltages.  In fact, so far it looks like core 4 is the primary reason my own 5600X didn't get binned higher: all cores except core 1 and core 4 go all the way to -30 (with core 1 so far looking stable at -29), while core 4 stubbornly requires at least -20 (I may have to raise it even more -- still testing.)  I've seen a drop of several degrees from the absolute maximum I got with the fixed frequency + voltage override thanks to this.


Word of warning: if your motherboard behaves like mine, the normal PBO max boost setting (the one in the regular PBO section rather than the "AMD Overclocking" section) actually overrides the overclocking section's setting.  I had accidentally turned that option on, then set it to 0, and it overrode the overclocking setting, putting the boost back to the stock 4.6.  I had to return the normal one to Auto for the overclocking section's negative override to function.


PS.  Core 4 sucks.  What a jerk.


Well, further testing is needed, but it seems it still performs notably better in several things with a fixed frequency of 4.4 rather than setting the boost limit to 4.4.  I have seen some very noticeable hitches in games like No Man's Sky when loading terrain, for instance.  That may be down to something else, so, again, more testing is needed, but it does make sense that it might handle multiple threads of varying demand better when not downclocking cores under lower loads.

Journeyman III

My problem is that my processor's temperature climbs by 1 to 2 degrees per second, reaches 105 degrees, and shuts off, even though I have a liquid cooler.
I haven't found the cause.  I undervolted in software, fixing the frequency at 3300 and the voltage at 1.02, but yesterday I ran into the same thing again.  It doesn't happen on every boot, but if I turn it on 10 times, it does it 3 of them.  What's the problem?


If it hits 105 and shuts off even with lowered voltage and a fixed frequency of 3300 MHz, then something is wrong with the cooling itself.  The obvious thing is to check that nothing is loose, but since you see a steady increase instead of random instability or fast spikes, my bet is the cooling system itself isn't working right.  Most probably the pump isn't running.  Also remember that a water cooler still relies on dissipating the heat at the other end, which usually means a fan running across a radiator unless you have a giant passive cooler, so make sure that is running too.  These coolers often have a connector to plug into the motherboard, but maybe not all motherboards can actually drive them (a typical case fan draws something like 0.3 amps, so a header designed around that range would not handle running a water cooler well at all.)  Some boards have dedicated AIO connectors, so make sure you're using that if yours has one.  Otherwise, try an adapter to run the pump off the power supply directly if possible.


If none of that makes any difference, it might just be a hardware failure.  I don't know if you've had that water cooler for a while or if it's new, but if it's older, remember most AIOs in particular have pretty much a built-in limit of roughly 2 years, give or take.  Even good custom water loops eventually wear out at the joints, and the pump motors aren't infinite either, so they still have a limited lifetime (albeit usually better than an AIO's.)  Under the circumstances, if it is a hardware failure, you're better off with the stock cooler -- even as terrible as it is.  The stock cooler can't handle these processors out of the box, but it can certainly manage a lot better than 3.3GHz.  As I said before, limiting it to roughly 250MHz below the maximum boost clock alone makes a gigantic difference in temperatures (best when combined with an undervolt, of course.)  The stock cooler can definitely handle these CPUs in normal usage if you stay below the point where the voltage shoots up so much.  It may not handle heavy encoding or that sort of thing; you'd probably have to go a bit lower still for that, though if you're setting the limit in the PBO settings themselves it will automatically lower clocks as it hits the temperature limits.


BTW, hitting the maximum temperature and shutting off isn't good for the processor.  The emergency shutoff does a lot to keep it from instantly frying itself, but it is very definitely not good to have hit that point so many times.  It's distinctly possible it won't last terribly long even if you get it back under control.  If you want it to last at all, you can't put off fixing this -- one way or another, fix it ASAP.

Journeyman III

Thanks a lot for your attention


No problem.  Did you ever figure out what the cause of your issue was?  I'll admit I'm a tad curious.