
GDDR6 Yields ~6% Performance Improvement Over GDDR5

In a review by Expreview, cited by Tom's Hardware, comparing the GTX 1650 GDDR6 to the GDDR5 version:

The GTX 1650 Destroyer OC has a 1,485 MHz base clock and 1,665 MHz boost clock, while the GTX 1650 GDDR6 Destroyer OC has a 1,410 MHz base clock and 1,620 MHz boost clock. That's equivalent to a 5% reduction on the base clock and 2.7% on the boost clock.

The GTX 1650 GDDR6 delivered about 2-12.5% performance gains over the GDDR5 version, depending on the benchmark and game. Collectively, we're looking at an average performance increase of 6.4%, based on Expreview's results. 

https://www.tomshardware.com/news/gtx-1650-gddr6-vs-gddr5-performance
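For anyone checking the numbers, here's a quick sketch of the arithmetic behind those clock-reduction percentages (values taken straight from the quoted review; Python used purely for illustration):

```python
# Re-checking the clock deltas quoted above (figures from the Expreview review)
gddr5_base, gddr5_boost = 1485, 1665   # GTX 1650 Destroyer OC (GDDR5), MHz
gddr6_base, gddr6_boost = 1410, 1620   # GTX 1650 GDDR6 Destroyer OC, MHz

base_cut = (gddr5_base - gddr6_base) / gddr5_base * 100      # ~5% lower base clock
boost_cut = (gddr5_boost - gddr6_boost) / gddr5_boost * 100  # ~2.7% lower boost clock

print(f"Base clock reduction:  {base_cut:.1f}%")
print(f"Boost clock reduction: {boost_cut:.1f}%")
```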

leyvin
Miniboss

The benefits of GDDR6 over GDDR5 really are going to depend on how bandwidth-starved the GPU is.

For NVIDIA Hardware... in most scenarios it's just not really going to make any difference, as they're designed around the existing bandwidth with a wide enough Cache to rarely need to access VRAM; so really the only performance improvement will stem from Large Read/Write Operations.

(Big Texture Sets or such)
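For scale, the raw uplift on the GTX 1650's 128-bit bus is roughly 50%; whether any of that shows up as FPS depends on how often the GPU actually spills out of its Caches. A rough sketch, using the nominal 8 Gbps / 12 Gbps per-pin rates of the two cards (treat those as assumptions for any particular board):

```python
# Peak-bandwidth comparison on a 128-bit bus (the width used on both GTX 1650 variants).
# The 8 Gbps / 12 Gbps per-pin rates are the nominal figures for the GDDR5 and GDDR6 cards;
# treat them as assumptions if a given board is clocked differently.
def peak_bandwidth_gbs(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate * bus width / 8 bits per byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(peak_bandwidth_gbs(8, 128))   # GDDR5: 128.0 GB/s
print(peak_bandwidth_gbs(12, 128))  # GDDR6: 192.0 GB/s -> ~50% more raw bandwidth
```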

The same is more-or-less true for RDNA as well, but for GCN... well, that's a different story.

Keep in mind GCN was designed to use double the Cache it actually shipped with... this means Memory Bandwidth often ends up being one of the biggest bottlenecks (esp. for Geometry). Also keep in mind that GCN doesn't use Colour Compression as aggressively as NVIDIA Hardware does (which is arguably overly aggressive).

So GCN also generally uses more bandwidth Per Pixel being Streamed and Cached. 
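As a rough illustration of why that matters: with effective Colour Compression a GPU writes a fraction of the colour data per pass, so the same bandwidth goes further before starvation sets in. The 2:1 ratio below is purely an assumed figure for illustration, not a measured one for any specific part:

```python
# Illustrative only: the 2:1 average compression ratio is an assumption,
# not a measured figure for any specific NVIDIA or AMD GPU.
def color_pass_traffic_mb(width: int, height: int, bytes_per_pixel: int = 4,
                          compression_ratio: float = 1.0) -> float:
    """Bytes written for one full-screen colour pass, in MB."""
    return width * height * bytes_per_pixel / compression_ratio / 1e6

print(color_pass_traffic_mb(1920, 1080))                         # ~8.3 MB, no compression
print(color_pass_traffic_mb(1920, 1080, compression_ratio=2.0))  # ~4.1 MB with 2:1 compression
```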

It's why the idea of the High-Bandwidth Cache was extremely innovative and interesting, but unfortunately it was never fully implemented... although I'm not sure why, because it could've solved said issue (acting much like the eDRAM/ESRAM on Xbox 360/One), which, when used right, dramatically reduces the need for larger Bandwidth to prevent Data Starvation, thus allowing better / full utilisation of the Hardware and reducing Frame Jitter.

It didn't "Improve" performance per se... and it's odd that it is here. 

Rather it would greatly improve the Lows, because those typically stem from Data Starvation; thus the Overall Avg. would rise; not to mention you'd be more likely to have consistent framerates. 
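A toy calculation makes the point (the frame times below are invented purely for illustration): only the occasional stalled frames change, yet the average climbs and the pacing becomes far more consistent:

```python
# Toy numbers (invented for illustration): the same workload with and without
# occasional data-starved frames. Only the stalled frames change, yet the average rises.
frame_times_ms          = [10, 10, 10, 30, 10, 10, 30, 10]  # a few frames stall on memory
frame_times_ms_smoothed = [10, 10, 10, 12, 10, 10, 12, 10]  # stalls mostly hidden

def avg_fps(times_ms):
    return 1000 * len(times_ms) / sum(times_ms)

print(f"With stalls:   {avg_fps(frame_times_ms):.1f} fps")           # ~66.7 fps
print(f"Stalls hidden: {avg_fps(frame_times_ms_smoothed):.1f} fps")  # ~95.2 fps
```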

I've been curious as to whatever happened to HBC in RDNA. Mind you, that said, RDNA finally ships with the full Cache its ISA calls for, unlike GCN, which was short-changed on it; so maybe it's less important now.
