
siliconalley
Journeyman III

RE: 20.12.1 Optional (WHQL) Dec 8th, 2020 Driver causing GPU usage spikes

System tested on:

*Ryzen 7 5800X, 8-Core / 16-Thread (PBO Enabled)

*ThermalTake Silent12 Direct Heat Pipe CPU Tower Cooler (150w TDP rating)

*ASUS X570-P Prime (latest BIOS 3001, Re-size BAR/S.A.M. compatible, PCIe 4.0 compatible)

*32GB (4x8GB) DDR4-3600MHz G.Skill Ripjaws V CL18 1.35v with SOC voltage fixed at 1.1000v for dual-rank, 4-stick stability

*ASRock Radeon 6900XT "Big Navi" 16GB GDDR6 Triple-Fan Graphics Card

*850w Gold-rated PowerSpec PSU with two independent graphics power rails, each capable of providing 150w per 8-pin plug, plus the up-to-75w available from the x16 PCIe slot itself
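As a sanity check on the power delivery above, here's a quick back-of-the-envelope budget. The 150w-per-8-pin and 75w-slot figures are connector ratings from the PCIe CEM specification, not measurements from my system:

```python
# Rough power-budget sanity check (connector ratings, not measured values):
# each 8-pin PCIe connector is rated for 150 W, and the x16 slot can
# supply up to 75 W per the PCIe CEM specification.
PLUG_W = 150          # rating per 8-pin connector
NUM_PLUGS = 2         # two independent graphics power rails
SLOT_W = 75           # PCIe x16 slot limit

budget_w = PLUG_W * NUM_PLUGS + SLOT_W
print(f"theoretical board power budget: {budget_w} W")

# Afterburner later reports a peak draw around 300 W, so even with the
# power limit maxed there is real margin on the delivery side.
measured_peak_w = 300
print(f"headroom vs. measured peak: {budget_w - measured_peak_w} W")
```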

 

First and foremost, the 20.12.1 Optional (WHQL) driver ships out of the box with TERRIBLY optimized GPU fan curves, along with excessively high voltage.

I noticed immediately that temps shot straight up to 80-84 degrees Celsius upon entering any game, and the core clock would then plunge from 2250MHz down to 1000MHz to cool off: thermal throttling out of the box.

HALFWAY FIX:

Using the included Radeon software, I:

-Manually set the minimum frequency to 2450MHz and the maximum frequency to 2550MHz, a stable OC also validated by GamersNexus (to prevent such large clock-speed swings during gameplay)

-Manually set the memory frequency to 2150MHz

-Scaled the power limit all the way to the right (+15%), allowing the GPU to draw however much additional wattage it needed

-Manually set the fan curves to hit 80% fan RPM at 65 degrees Celsius or higher

-Undervolted from the stock 1170mV to 1070mV, which never crashed any games and proved perfectly stable for the power-hungry card and its clocks
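For anyone curious why the undervolt helps so much thermally, here's a rough first-order estimate. It assumes dynamic power scales with V²·f, which is only an approximation for a real GPU (leakage and other terms are ignored):

```python
# First-order dynamic power model: P is proportional to V^2 * f.
# This is an approximation only; real GPU power includes static leakage.
stock_mv = 1170       # driver's stock voltage
undervolt_mv = 1070   # my stable undervolt

# At unchanged clocks, relative dynamic power scales with the voltage ratio squared.
scale = (undervolt_mv / stock_mv) ** 2
print(f"relative dynamic power at the same clocks: {scale:.2f}")
# i.e. roughly a 16% cut in dynamic power just from the 100 mV drop
```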

 

Upon testing these newly overclocked and undervolted settings, my temps finally settled into the industry-typical 55-65 degrees Celsius range for open-fan-shroud GPUs, while the overclock delivered higher max FPS. Amazing, right? Not quite.

 

Throughout gameplay, I noticed that although my CPU usage sat at only 19-20%, my GPU usage would drop from 100% to as low as 80% in-game. Each drop introduced an enormous lag spike, with game pauses and the frame rate falling from 280+ FPS to 130 FPS. I found it odd, since my GPU temp was a stable 62 degrees every time this happened.

 

Verdict:

It's not a CPU bottleneck (obviously: the Ryzen 7 5800X shows sub-20% usage)

It's not a GPU thermal bottleneck (a stable 62 degrees, far below the junction temperature limit)

Average wattage draw on the GPU reads 289-300w in Afterburner

Average wattage draw on the CPU reads 80-90w in Afterburner

It's not a power bottleneck, given that wattage draw
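To pin down exactly when the utilization dips line up with the FPS drops, one option is to log hardware data during gameplay (Afterburner and HWiNFO can both export logs) and scan the export afterwards. A minimal sketch, assuming a CSV export with hypothetical column names `gpu_usage` and `framerate` (adjust these to whatever your logger actually writes):

```python
import csv

def find_dips(path, usage_floor=90, fps_floor=200):
    """Return logged rows where GPU usage and FPS dip at the same time."""
    dips = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            usage = float(row["gpu_usage"])   # assumed column name
            fps = float(row["framerate"])     # assumed column name
            # Flag samples where the GPU is underutilized AND FPS has cratered,
            # matching the 100% -> 80% usage / 280 -> 130 FPS pattern I see.
            if usage < usage_floor and fps < fps_floor:
                dips.append(row)
    return dips
```

If the flagged timestamps also show low CPU usage and a cool GPU, that backs up the verdict above that neither the CPU, thermals, nor power is the bottleneck.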

 

Just to note, this DOES NOT occur in graphically demanding titles like Cyberpunk 2077. This immersion-breaking GPU-usage fluctuation and FPS spike only occurs in simpler game engines, such as Overwatch (high-FPS eSports). Given when the problem occurs, it looked and felt like a CPU bottleneck, but again, I'm on a powerful Ryzen 7 5800X with PBO enabled, showing only 19-20% usage.

 

I've tried enabling and disabling Re-size BAR/S.A.M. in the BIOS - didn't fix it

I've tried manually setting PCIe x16 slot 1 to Gen 4 in the BIOS - didn't fix it

I've tried manually setting PCI Express Link State Power Management to Off in Windows' advanced power settings - didn't fix it

 

Has anyone come up with a possible fix for this? I am almost at the point of just returning my Radeon 6900XT and getting an RTX 3080 10GB card instead.

 

[Attached screenshot: Radeon software settings]