Hello, I'm new here and I'm desperate for some solid advice, since I've tried everything I can think of.
Last week I purchased an RX Vega 56 OC Edition (Air Boost) and started playing my games. I have a 144 Hz monitor with FreeSync enabled in both Catalyst and the monitor, and I noticed my FPS is terrible at some points and my games stutter all over the place. I started monitoring my performance, and I've seen that in almost ALL my games (for example BO4, BF V, The Witcher 3, Far Cry 5) my GPU usage can drop all the way down to 50-60%, causing a HUGE FPS drop and terrible stutter.
My CPU is not the weak point: I have an i7 7700 (which never reaches 100% and stays cool), 16 GB of 2400 MHz RAM, and an 850W PSU.
All I want is for my new card to work at its maximum potential REGARDLESS of which graphics settings I choose. I'm aiming for 144 FPS in all my games, and these low-usage drops are not helping whatsoever.
Please, I'm desperate for some solid advice!
*I have the most recent BIOS and GPU drivers as well.
Message was edited by: Matt B
That really depends on which PSU you have. It is less likely to be an issue if your amperage is delivered over a single rail, but even in that case you can overwhelm a single cable's ability to deliver power.
You should definitely not be daisy-chaining PSU cables, but I doubt it has any effect on GPU utilisation. If anything, it will just make your system unstable.
OK, wow. I plugged a separate 8-pin cable into each connector, and the problem seems reduced now! Utilization still jumps around, but FPS seems a lot more stable, which is an improvement! I still think the optimization in these games is very strange, since my CPU doesn't even come close to being fully utilized (which is why I don't understand the logic behind the drops). Another step towards improvement!
Or maybe I'm just imagining things, lol.
amdmatt is 100% correct. The issue in this case depends on the PSU and the power requirements of the GPU. I didn't see your PSU model in this thread, but some PSUs can have an 850W rating yet split that power over four 12V rails, for example. In that case, each 12V rail has 212.5W available to it.
An 8-pin PCIe power cable can supply up to 150W, so with two connected, the GPU can expect delivery of 300W. Unfortunately, if both GPU power connectors are fed by a single cable plugged into a single slot on the PSU, it only has one rail's worth of power to draw from, the 212.5W, resulting in an underpowered GPU. If you split the connections between two cables and ensure each cable is on a different rail (usually labeled on the PSU), then the power available to the GPU is 425W.
Sometimes, as I mentioned earlier, even if the PSU has a single rail (preferred, for exactly the reasons above) delivering the full 850W, you can still see performance degradation from using a single cable. In that scenario, the cable would be required to deliver the full 300W to the GPU, which some cables simply cannot do. By splitting the connectors across two cables, you ensure that a single cable never has to deliver more than 150W.
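The rail arithmetic above can be sketched as a quick budget check. This is a minimal illustration in Python, assuming the nominal figures from the discussion (an 850W PSU split over four 12V rails, 150W per 8-pin PCIe connector); it is not a substitute for checking your PSU's actual rail layout.

```python
# Rough PSU power-budget check for a GPU with two 8-pin connectors.
# Figures below are the nominal values discussed above, not measurements.
PSU_WATTS = 850
NUM_RAILS = 4
RAIL_WATTS = PSU_WATTS / NUM_RAILS   # 212.5 W available per 12V rail
PCIE_8PIN_WATTS = 150                # spec limit per 8-pin connector

gpu_draw = 2 * PCIE_8PIN_WATTS       # GPU expects up to 300 W total

def available(rails_used: int) -> float:
    """Power deliverable to the GPU given how many rails feed it."""
    return rails_used * RAIL_WATTS

for rails in (1, 2):
    budget = available(rails)
    status = "underpowered" if budget < gpu_draw else "OK"
    print(f"{rails} rail(s): {budget} W available vs {gpu_draw} W draw -> {status}")
```

One daisy-chained cable on one rail gives 212.5 W against a 300 W draw (underpowered); two cables on two rails give 425 W (OK), matching the numbers above.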
Yeah, absolutely no doubt daisy-chaining off one cable is a no-no. Regardless of what the OP has going on, there are tons of reports of people with Vegas who undervolt and remove the throttling issues.
I have anecdotally read the same thing. Undervolting a Vega that is using the reference blower can actually give higher performance than the stock settings. The undervolting allows for lower thermals at the same clock speed which can help keep those clock speeds higher. Anything you can do to improve the overall case ventilation and airflow will help.
With my Vega 64, it would average around a 1450 MHz clock speed with the reference cooler installed. It would run in the 1500 MHz range briefly, get too hot, and have to drop down to a lower p-state. Increasing the power limit via turbo mode or a custom profile did nothing; it would still average around 1450 MHz, and while it could jump up to the full 1632 MHz briefly, it would just hit the thermal limit faster. I thought about undervolting it myself for a while, but opted to put the GPU under an EKWB waterblock instead.
Now I have it on a 3% overclock with 1000 MHz on the HBM2 memory, and the highest temps I see are around 35°C. It can hold a 1675 MHz clock indefinitely with the power limit set to +50%.
Exactly in line with what I have read, but I don't own one myself. I have a 580, which has its own issues. To me, no card out of the box should require all this tinkering to get it to work. This isn't like overclocking, where playing with settings is expected: these recent cards simply don't work as they should without a good amount of adjustment. So I don't agree with Matt that there isn't a driver issue. There is certainly an issue somewhere: many of the older cards that now require tweaking never needed it before the WattMan drivers, and the newer cards either have BIOS issues, driver issues, or can't live up to their white-paper specs, in which case those specs must be wrong. Any way you slice it, it's a nightmare for far too many users, and one that should be unacceptable by anyone's standards.
Sorry, it's been days and I never read your last comment. I re-read through all of this and absolutely understand and accept that the testing results are correct. I appreciate Matt taking the time to do and contribute all the testing; frankly, he is just awesome like that. I also understand what makes cards throttle relative to the power you give them, through changing power requirements based on demand.

The question I have to ask is this: if that Vega card is CPU-bottlenecked by that processor, why do lesser cards in the AMD line, like my RX 580, actually scale better with such CPUs than the flagship product? Is that not a step backwards from Polaris to Vega? It concerns me because I absolutely want AMD to succeed; I am an admitted fan. Every review I have read about Vega voices disappointment at how it compares to the competition in gaming. Yes, I absolutely get that it has computational areas where it shines; I am speaking strictly about gaming and overall stability.

Over the last year I have, without any doubt, seen a 1050 Ti (my first Nvidia card in many years) scale better, by a lot, with a Phenom II X6 1090T than an RX 580 does. That has me thinking out loud that this generation takes a step back from its predecessor in scalability, and the predecessor was already a step back from the competition. Just not a good trend line. To be clear, I wasn't disappointed that my RX 580 didn't do better with that CPU; I figured it wouldn't. I was just surprised to find that the Nvidia card literally brought that CPU back to legitimate gaming viability.

My next observation is that Vega was introduced when this CPU was of literally the same generation and was the second-fastest i7 in Intel's lineup at the time. It is a CPU whose processing load rarely exceeds 4 threads, which for 95% of even the very latest AAA games is ample, and which on a core-to-core basis is still in the same ballpark as AMD's current Ryzen CPUs, and hands down faster than their older processors. The final question, then, is just how slim a window of prospective purchasers with the right hardware is there for whom Vega makes any sense as a viable purchase? I mean no disrespect at all; I'm just trying to wrap my head around it, because I was literally considering buying one myself for Christmas, more out of brand loyalty than research, even knowing about a lot of the issues I have read about. But reflecting on so many of those issues, and on how it may scale with my 7700K, I think I just need to wait for a future generation, and hope that generation arrives before gaming forces me to switch to something green again.
amdmatt, this is the card I have been thinking about getting; it seems like a great value at the moment. I am gaming on a FreeSync 75 Hz max monitor at 1440p right now with an RX 580 and am not really unhappy, other than a few games that dip into the high 40s. Most recently, though, a new game took me into the mid 30s. Maybe that's the game's fault, but it had me thinking that at this good price I might pull the trigger and get a card that will truly max out what my monitor can do, with some real legs for the games to come over the next few years.

All of this has me wondering: if my 7700K isn't going to be enough to drive this thing, then it isn't worth spending the money. What is your honest opinion on this? What do you think of the card I am considering (link below), and what power supply wattage and rating, not the listed minimum, do you think would guarantee rock-solid performance without over-buying, since wattage recommendations seem to be trending down again at this point?

The card, $410 at the moment: SAPPHIRE Radeon RX Vega 64 DirectX 12 21275-03-20G Video Card - Newegg.com. A decent 1070 Ti that shows relatively similar performance in online benchmarks wouldn't require me to replace my 650W Gold Plus PSU, is 30 bucks cheaper, and comes with a $60 game I actually want. I'm not expecting you to tell me to buy the Nvidia card by any means, just pointing out the current conundrum: that's literally about $190 cheaper overall to give me more, and potentially do it better. The 1070 Ti I'm talking about: ZOTAC GeForce GTX 1070 Ti DirectX 12 ZT-P10710C-10P Video Card - AMP! Edition - Newegg.com
Can you post the results of your Far Cry 5 benchmark run? I found the results below from PC Gamer.
As you can see, the Intel Kaby Lake part (i3-7100) performs far worse than the newer i3-8100, but what is interesting is that it performs on par with the AMD Ryzen parts. Since you have an i7-7700 and I am using Ryzen, I should actually be just as, if not more, CPU-bound than you are, and should have lower frame rates at 1080p despite my faster Vega 64.
Those results look like what I would expect. It makes sense that the 8th gen i3 is that much better, as they are essentially rebranded 7th gen i5s. However, those results are on a 1080 Ti, and, as I asked earlier without getting an answer: is Vega far more CPU-dependent?
Vega 64 actually runs between the GTX 1080 and GTX 1080 Ti in this title, so the game should be slightly more GPU-bound than the graph I posted. What is interesting about daniel91's results is the stuttering and drops he was experiencing. Based on the results above, even an i3-7100 gives a 97th-percentile frame rate of 69.8 fps, so with his CPU he should be inside the FreeSync range of his monitor (70-144 Hz) almost all the time. That is why I am curious to see his Far Cry 5 benchmark result. Based on that data, I am likely more CPU-bound with Ryzen, so I should see lower overall frame rates despite the faster GPU. If that isn't what happens, then he must be throttling somewhere.
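For anyone who wants to compute a 97th-percentile figure like the one above from their own frame-time log, here is a minimal sketch in Python. The frame times in the sample list are made-up illustrative data, not anything from this thread: the idea is that the slowest few frames drag the percentile figure far below the average, which is exactly the stutter pattern being discussed.

```python
# Derive a 97th-percentile frame rate from a log of per-frame times (ms).
# The sample data below is invented for illustration.

def percentile_fps(frame_times_ms, pct=97):
    """FPS figure such that about pct% of frames render at least that fast:
    take the frame time at the pct-th percentile (slow end) and invert it."""
    times = sorted(frame_times_ms)                    # fastest to slowest
    idx = min(int(len(times) * pct / 100), len(times) - 1)
    return 1000.0 / times[idx]

# Mostly ~11 ms frames (~90 fps), with one 25 ms hitch:
sample = [11.2, 10.9, 12.0, 11.5, 25.0, 11.1, 11.4, 11.3, 11.8, 11.6]
fps97 = percentile_fps(sample)
print(round(fps97, 1))  # 40.0: the single hitch dominates the percentile
```

A run like this would show a healthy average but a 97th-percentile figure well below the FreeSync floor, which is why the percentile number is more telling than the average when diagnosing stutter.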
Do you guys think this could be RAM-related? Maybe some configuration is wrong? I currently have 2400 MHz 8GBx2 dual-channel RAM. Could any BIOS options be the cause of poor GPU usage? There is an option to select an XMP configuration (Disabled or Profile 1) and to switch from Normal to Enhanced Performance/Stability or Chill OC (Gigabyte motherboard).
No, I don't. RAM speed makes a pretty big difference on Ryzen systems, but even there the gains mostly don't break into double-digit percentages, and it matters even less on Intel CPUs, so I really doubt it. That being said, is 2400 your XMP speed? If so, it is slow. So yes, enable XMP; unless it causes stability issues, it will make things a bit faster. Extra speed rarely ever hurts, that's for sure. Mostly, settings in the BIOS aren't going to change GPU behavior, unless we are talking about global stability affected by things like overclocks and voltage. If you are not on the latest BIOS, that is the exception: like software updates, it is always best to have firmware up to date, though there are even arguments against that. The best advice is to change one thing at a time, see what happens, and don't start changing stuff unless you are having a problem. Don't create new issues you didn't have; if it isn't broken, don't fix it.
I just got a 56 with the same issues. I followed the "AMD video optimization for gaming" global settings (YouTube) and tested games like Crysis 3. On very high or low settings I get spikes down to 35 fps: I could stand where it sits at 35 fps and the GPU would only be around 1200 MHz, not even trying to push itself to my maximum of 1600. Then there are parts, like looking at the grass and flowers moving, where it maxes out and gives me around 150 fps. I have been trying to figure out why it doesn't push itself when it needs to.
The physics calculations for the moving long grass in Crysis 3 are very CPU-heavy; I am familiar with the area you are referring to, as I've played that game a lot.
I just ran a benchmark: minimum 71, average 89, maximum 121, all on the Ultra preset with no motion blur.
FPS is great when everything is calm, but the graph fluctuates a ton and causes really unstable FPS and stutter (with FreeSync on).
Really annoying. It happens in many demanding games too.
A minimum of 71 and a maximum of 121 doesn't really seem too bad; you are well above 60 FPS at all times. The stuttering you experience doesn't seem related to any catastrophic drop in FPS.
The FreeSync range of your monitor is 70-144 Hz, meaning the monitor will run at a refresh rate identical to the FPS being put out until you drop below 70, at which point LFC kicks in.
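The refresh behavior described above can be sketched roughly as follows. This is an illustrative model only, assuming the 70-144 Hz range mentioned in this thread; a real monitor's scaler is more sophisticated about choosing the frame-repeat multiplier.

```python
# Simplified model of variable refresh with Low Framerate Compensation (LFC).
# Assumes the 70-144 Hz FreeSync range discussed above; illustrative only.

FS_MIN, FS_MAX = 70, 144

def effective_refresh(fps: float) -> float:
    """Refresh rate the panel runs at for a given game fps."""
    if fps > FS_MAX:
        return FS_MAX          # capped at the panel's maximum refresh
    if fps >= FS_MIN:
        return fps             # inside the range: refresh tracks fps 1:1
    # Below the range: LFC repeats each frame enough times that the
    # multiplied refresh lands back inside the FreeSync window.
    multiplier = 2
    while fps * multiplier < FS_MIN:
        multiplier += 1
    return fps * multiplier

print(effective_refresh(89))   # 89: inside the range, tracks fps directly
print(effective_refresh(40))   # 80: each frame shown twice
```

This is why a benchmark whose minimum stays at 71 fps should, in principle, keep the panel in its variable-refresh sweet spot the whole time, and why the stutter has to come from something other than leaving the FreeSync range.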
I must apologize, as I tried to boot into Far Cry 5 last night and got the good old Ubisoft "Access Denied". It turns out the RGB software from ASUS was tripping the DRM for some reason. I'll disable it this evening and give things another go.
I had the same problem a while back, playing Yakuza 0.
I fixed it after I swapped my i5 6600K for an i7 7700.
Is it because the driver is optimized for 8 threads? Now I have high GPU utilization and the game is buttery smooth at max settings.
One other adjustment: I undervolted to 1050 mV for P5-P7.
You could try this.
Hello Daniel, I have the same GPU with the same problem. It's not specific to this GPU; it shows up across all kinds of games and hardware configurations. Gaming on PC has become more and more difficult and incomprehensible compared to console, due to the multiplicity of configurations. This low GPU usage can come from the driver, bad game optimisation, the OS, or the GPU itself.
That's why I created a post on Steam to shed some light:
To my mind, trying to reach 144 fps in all games is not only impossible, it doesn't really make sense, as I explain in that Steam thread. Even at 30 fps, almost all games feel comfortable. Aiming for 144 fps has more to do with a desire for hardware performance than with a need to play in the best conditions.