
General Discussions

jammerjonn
Adept III

How to tell if I need to update my GPU

I'm using a new R7 370 GPU with 4GB of VRAM for my flight simulator (X-Plane 11). It's on one monitor, not overclocked, the cooling fans don't come on (I'm guessing it isn't running hot enough to turn them on), and I have the latest drivers. How can I tell if it's time to move up to an 8GB GPU? I'm not freezing up or tearing and it seems OK. My system is an AMD 8-core 3.61 GHz CPU (not overclocked), 16GB RAM, Asus R7 370 card, Samsung 860 EVO SSD, Windows 7 Pro. Thanks.

1 Solution
leyvin
Miniboss

Your original post confuses me, as you're basically asking, "I'm having no issues... should I upgrade to an 8GB graphics card?"

If you're having no problems, such as unplayable performance, then the simple answer is no.

Still, with that said: while 8GB VRAM is becoming common, there are very few games that actually benefit from more than 4GB of VRAM (4GB is the most common configuration, and thus what developers such as myself will target).

On top of this, keep in mind that as game engines switch from classic APIs (DirectX 11, OpenGL or Metal) to modern APIs (DirectX 12, Vulkan or Metal 2), we become capable of manually handling resources...

This is a double-edged sword: most developers will initially struggle, likely opting for middleware graphics-memory-management solutions (such as what AMD offers on its open-source GPUOpen platform), resulting in similar VRAM usage. However, that is only true of PC-exclusive developers; historically, console-exclusive developers (such as myself) have done this kind of manual resource management throughout the tenure of 3D on consoles.

You're already beginning to see this, but it'll become more common for the "console" version not to be developed separately from the PC version(s), meaning you'll have developers who not only better understand their engines (as opposed to 3rd-party port teams) but also have substantially greater experience with close-to-metal / low-level optimisation.

When it comes to resource management (VRAM), the difference between a DirectX 11 approach and a DirectX 12 approach can often mean up to 50% lower memory usage (frame-for-frame), as well as considerably lower data-bandwidth usage.
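To illustrate where that saving comes from, here's a toy model (plain Python, not real graphics-API code; the buffer names and sizes are made up): under explicit DirectX 12-style management, transient render targets whose lifetimes don't overlap within a frame can alias the same heap memory, so the cost is the peak concurrent usage rather than the sum of all allocations.

```python
# Each tuple: (name, size in MB, first render pass used, last render pass used).
transients = [
    ("shadow_map",   64, 0, 1),
    ("gbuffer",     128, 1, 2),
    ("ssao",         32, 2, 3),
    ("bloom_chain",  48, 3, 4),
]

def dx11_style_total(resources):
    """Implicit management: every resource keeps its own allocation all frame."""
    return sum(size for _, size, _, _ in resources)

def dx12_style_peak(resources, num_passes=5):
    """Explicit management: resources with non-overlapping lifetimes alias the
    same heap memory, so VRAM cost is the peak concurrent usage."""
    peak = 0
    for p in range(num_passes):
        live = sum(size for _, size, first, last in resources if first <= p <= last)
        peak = max(peak, live)
    return peak

print(dx11_style_total(transients))  # 272 (MB)
print(dx12_style_peak(transients))   # 192 (MB)
```

With this particular (invented) set of buffers the saving is about 30%; the shorter the lifetimes, the closer the saving gets to the 50% figure quoted above.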

This means that 4GB VRAM will remain more than viable over the next few years... and that window will extend further, at least on AMD, with the advent of the High-Bandwidth Cache (i.e. an ESRAM-style buffer, akin to what the Xbox 360 / Xbox One have) that separates out the frame buffers (which, at UHD resolutions with anti-aliasing, anisotropic filtering and temporal buffers, can be quite memory-intensive). The same goes for NVIDIA, who typically carve this out as a segment of their total VRAM, hence their "odd" memory totals.
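Some back-of-envelope arithmetic shows why UHD frame buffers alone are memory-intensive (the exact buffer set varies per engine; this one, with 4x MSAA and two history buffers, is just an illustrative assumption):

```python
# Rough frame-buffer footprint at UHD (3840 x 2160); figures are illustrative.
W, H = 3840, 2160
BYTES_RGBA8 = 4   # 8-bit-per-channel RGBA colour target
BYTES_D24S8 = 4   # 24-bit depth + 8-bit stencil
MSAA = 4          # 4x multisampling multiplies colour/depth storage

def mb(nbytes):
    return nbytes / (1024 * 1024)

color = W * H * BYTES_RGBA8 * MSAA
depth = W * H * BYTES_D24S8 * MSAA
# Temporal techniques (TAA etc.) keep extra full-resolution history buffers.
history = W * H * BYTES_RGBA8 * 2

total_mb = mb(color + depth + history)
print(round(total_mb))  # 316 (MB), before a single texture or mesh is loaded
```

Over 300MB just for the render targets is a sizeable slice of a 4GB card, which is why carving them into a separate high-bandwidth pool helps.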

Games like Forza Horizon 4 are excellent examples: if that game were built on DirectX 11, it would *REQUIRE* a minimum of 6-8GB VRAM for Ultra settings; but because it uses a quite well-optimised DirectX 12 engine, it instead peaks at 3.6GB of VRAM usage.

Things like real-time ray tracing (although I'd argue it isn't really "ready" for consumers / gaming, at least not at the scale that would impress gamers) are actually even more memory-efficient... which is likely why the RTX 2080 has only 8GB compared to the 11GB of the GTX 1080 Ti, as you can guarantee NVIDIA are utilising the hardware for driver optimisation / acceleration in the background even in non-RTX games.

Which makes sense: a hardware ray engine isn't exactly "exclusive" to one rendering-output approach; it's actually usable for most of the "grunt work" of 3D mathematics. So having cores that aren't tied specifically to polygon rasterisation, shader ALU operations, etc. could help, just as it could be used to substantially improve physics and AI pathfinding.

It's honestly an exciting technology from a developer's standpoint, given the freedom it provides; but that's just the backend, which the gamer is never going to see or notice. How NVIDIA have implemented it and are currently marketing it to consumers... well, that's just weird, and frankly it's likely a stop-gap until they develop an actual "next-generation" architecture; right now they're falling behind AMD in that regard, due to the success of their previous architecture base.

Personally, I'd recommend waiting until Q2-Q3 2019. I hope AMD aren't planning to confuse consumers even more by again echoing their competition's naming (seriously, AMD... keep the RX Vega [CU] / RX Navi [CU] naming convention; it helps you stand out, makes generations much more recognisable, and gives you the freedom to release as few or as many SKUs as you wish without shoe-horning them into a pre-established tiered naming scheme). The one thing the rumours do seem to echo from my speculation is the performance and price points.

In essence, you'll be able to get effectively RX Vega 56 performance as, quite literally, a direct replacement for the current RX 580.

That's exceptionally good value for the money and performance.

On top of this, the Ryzen 3rd-gen APU (6 cores / 12 threads, so roughly Ryzen 5 1600 CPU performance) with graphics (20 CUs, so roughly GTX 1050 Ti performance) is set to essentially offer PS4-level performance at the $140 - 160 price point without the need for a discrete GPU... it wouldn't be a "massive" performance uplift, maybe +50-60% over what you currently have, but still one that won't break the bank.

So I'd argue it's worth waiting.


15 Replies

I would wait until you have upgraded to Win10. The new graphics cards, for the most part, don't support anything other than Win10, so they won't work.


Thanks. What does Windows 10 have over 7?


It depends on who you listen to... The important thing is, Microsoft stops support for Windows 7 in 2020.

What are the advantages of Windows 10 over Windows 7? - Microsoft Community

Is Windows 10 better and faster than Windows 7? - Quora


Thanks. I never call Mickeysoft, so that doesn't bother me. I found this online about Windows 10's Game Mode:

https://www.pcworld.com/article/3187171/windows/tested-windows-10s-game-mode-makes-unplayable-games-...


What are you basing your statement about the new graphics cards only being supported by Windows 10 on?


The answer was meant to reflect compatibility after Win7 becomes unsupported in a year, or if an upgrade were to include an APU like Ryzen... or maybe even the upcoming Navi graphics cards. I'm personally facing the same thing with a motherboard that is questionable... which basically means a new computer: new motherboard, new processor, and an upgraded graphics card.


Well, with the new games coming out and the prices dropping on the new cards, I thought it would be a good time to upgrade. My X-Plane 11 does have various quality levels plus add-ons. Right now I'm using a 4GB card. I wish there were a way in the AMD software to see whether I'm maxing out the 4GB of VRAM or not. It's not overheating and it's not overclocked, so the cooling fans aren't coming on. Also, what is Navi? And I noticed that the RX 590 card, which costs more, has Navi 10 where the RX 580 card has Navi 12... so I guess less is more?????? Thanks everyone for the info.


If you press Alt+R (assuming default keybindings) while playing a game, it will bring up the Radeon Overlay.

One of the tabs is labelled "Performance"; just enable it to get an overlay of the current details about your GPU(s) during gameplay, such as framerate, VRAM usage, fan speed, temperature, etc.

It can be customised and is fairly comprehensive.
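If you record the overlay's metrics to a file (Adrenalin's performance logging can save sessions to CSV), a small script can answer the "am I maxing out 4GB?" question directly by reporting the peak VRAM used. The column header below is an assumption for illustration; check the header row of your actual export.

```python
import csv
import io

# Hypothetical sample of an exported performance log; the "GPU MEM (MB)"
# column name is an assumption -- match it to your real file's header.
sample = """Time,FPS,GPU MEM (MB)
00:01,58,2310
00:02,60,2895
00:03,57,3642
00:04,59,3401
"""

def peak_vram_mb(log_text, column="GPU MEM (MB)"):
    """Return the highest VRAM reading (in MB) found in the log."""
    rows = csv.DictReader(io.StringIO(log_text))
    return max(float(row[column]) for row in rows)

print(peak_vram_mb(sample))  # 3642.0
```

If the peak sits well under 4096MB across a long session at your chosen settings, the card isn't VRAM-limited and an 8GB upgrade won't help on that front.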

As for the RX 580 and RX 590: both of these are Polaris architecture (Polaris 20 and Polaris 30 respectively; the number is just the revision / production generation).

Navi is the GCN 6th-gen architecture (Polaris is 4th gen, while Vega is 5th gen), which is the "next generation".

Thanks! I'll try that and let you know how it turns out!


Well, Alt+R brings up the game overlay, but nothing shows the GPU usage. I set everything to max and it stutters but is playable. I then started turning things off and down until I had good frame rates. I still think Santa needs to bring me an RX 590 8GB card. I noticed I'm playing in DirectX 10.


OK, I turned on the histogram, but it doesn't show something like "3.5GB of VRAM in use", etc. It shows GPU at 300MHz, memory at 150MHz, memory clock 1400MHz.


OK, I just downloaded and did a clean install of Adrenalin 2019, and I can now run my sim at approx. 90% of max, so a BIG improvement. I let the "advisor" pick the best settings for me.

jammerjonn
Adept III

Well, it helped my gaming, but WattMan keeps crashing my system even when I'm just browsing the internet. My system freezes or locks up for about a minute, and then after "releasing it" I get a pop-up from AMD saying WattMan defaults have been set. I never changed any. I again uninstalled, ran the clean-up program, and reinstalled 18.2.2, and it still happens. I'm trying to find the last software release before Adrenalin to install, since I had no issues with it. I love ATI, but it might be time to switch to Nvidia.


Forgot to say: Adrenalin 19.1.1 is worse on my system than 18.2.2.
