The benefits of GDDR6 over GDDR5 really come down to how bandwidth-starved the GPU actually is.
For NVIDIA hardware... in most scenarios it's just not going to make much of a difference, as those GPUs are designed around the existing bandwidth with caches large enough that they rarely need to touch VRAM; so really the only performance improvement will stem from large read/write operations (big texture sets and such).
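To put rough numbers on the bandwidth jump itself, here's a back-of-envelope sketch (the 8 Gbps and 14 Gbps per-pin rates are just typical figures for each standard, not any particular card):

```python
# Back-of-envelope peak bandwidth, assuming typical per-pin data rates
# (8 Gbps for GDDR5, 14 Gbps for GDDR6) on the same 256-bit bus.
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin rate (Gbps) * bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(8.0, 256))   # GDDR5: 256.0 GB/s
print(peak_bandwidth_gbs(14.0, 256))  # GDDR6: 448.0 GB/s
```

So the ceiling rises by roughly 1.75x on paper... the question is just whether the GPU was ever hitting the old ceiling in the first place.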
The same is more or less true for RDNA as well, but GCN... well, that's a different story.
Keep in mind GCN was designed to use double the cache it actually shipped with... this means memory bandwidth often ends up being one of its biggest bottlenecks (especially for geometry). Also keep in mind that GCN doesn't use colour compression as aggressively (it's arguably overly aggressive on NVIDIA hardware), so GCN generally uses more bandwidth per pixel being streamed and cached.
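As a rough illustration of what colour compression buys per frame (the compression ratios below are made-up placeholders; real effective ratios vary wildly with content):

```python
# Illustrative only: how much an effective colour-compression ratio cuts
# the bandwidth needed just to write out a frame's colour buffer.
def colour_write_traffic_mb(width: int, height: int,
                            bytes_per_pixel: int,
                            compression_ratio: float) -> float:
    """MB written per frame for the colour buffer, given an average
    effective compression ratio (1.0 = uncompressed)."""
    return width * height * bytes_per_pixel / compression_ratio / 1e6

# 4K, 32-bit colour; ratios are placeholders, not measured values.
print(colour_write_traffic_mb(3840, 2160, 4, 1.0))  # uncompressed: ~33.2 MB/frame
print(colour_write_traffic_mb(3840, 2160, 4, 1.5))  # modest compression: ~22.1 MB/frame
```

Multiply that gap by framerate and by however many render targets get read back, and the per-pixel bandwidth difference between the two architectures adds up quickly.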
It's why the idea of the High-Bandwidth Cache was extremely innovative and interesting, but unfortunately it was never fully implemented... I'm not sure why, because it could have solved exactly that issue. It could have acted like the eDRAM/eSRAM on the Xbox 360/One, which, when used right, dramatically reduces the raw bandwidth needed to prevent data starvation, allowing better / full utilization of the hardware and reducing frame jitter.
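Here's a toy sketch of that idea, with every number invented purely for illustration: a small fast on-chip pool absorbs the hottest traffic (render targets), so external DRAM only has to serve the remainder.

```python
# Toy model of the eDRAM/eSRAM/HBC concept; all figures are invented.
DRAM_BUDGET_GBS = 224.0      # hypothetical mid-range GDDR5 card
total_demand_gbs = 300.0     # what the frame actually wants to move
rt_traffic_gbs = 120.0       # portion that is render-target reads/writes

for fast_pool in (False, True):
    # Assume the render-target hot set fits entirely in the fast pool.
    demand = total_demand_gbs - (rt_traffic_gbs if fast_pool else 0.0)
    starved = demand > DRAM_BUDGET_GBS
    print(f"fast pool={fast_pool}: DRAM demand {demand:.0f} GB/s, starved={starved}")

# Without the pool the GPU is starved (300 > 224) and stalls;
# with it, 180 GB/s fits comfortably inside the same DRAM budget.
```

Same DRAM, same GPU... the starvation just disappears because the hottest traffic never leaves the chip.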
HBC wouldn't have "improved" performance per se... and it's odd that it's presented that way here. Rather, it would greatly improve the lows, because those typically stem from data starvation; the overall average would rise as a result, and you'd be far more likely to get consistent framerates.
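A quick illustration of why fixing the lows lifts the average (the frame times are made up, not a benchmark):

```python
# One starvation stall per 100 frames drags the mean down far more
# than its count suggests. Frame times are in milliseconds.
smooth = [16.7] * 99          # 99 frames at ~60 fps
stalled = smooth + [100.0]    # plus one 100 ms starvation hiccup

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

print(avg_fps(stalled))           # ~57.0 fps average with the stall
print(avg_fps(smooth + [16.7]))   # ~59.9 fps once the stall is removed
```

The average FPS rises purely because the worst frames got fixed... nothing about the "fast" frames changed at all.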
I've been curious as to whatever happened to HBC in RDNA. Mind you, RDNA does finally ship with the full cache the ISA calls for, unlike GCN's short-changed design, so maybe it's just less important now.