
Goodbye, Radeon, and your false promises.

Question asked by max0x7ba on Mar 9, 2018
Latest reply on May 29, 2018 by m3tho5

I was excited about Vega. They touted it as good at gaming and great for compute. And I waited for it, mostly because I had my top-of-the-range 2-year-old Freesync monitor and wanted to keep using it.

Vega is good for gaming, true, but not the best. I no longer care what the future holds for DirectX 12 and Vulkan, because few of the top games use those APIs. I want existing software using DirectX 11 and OpenGL to work great on AMD. I cannot trust AMD's promises of great performance in DirectX 12 and Vulkan, because their implementations of DirectX 11 and OpenGL have always been sub-optimal and often criticized. But I sure can trust a company that has always had great DirectX 11 and OpenGL implementations to make equally great DirectX 12 and Vulkan ones.

Now that I do machine learning, I wanted to use my Vega for its much-touted compute capability. All modern machine learning frameworks, such as TensorFlow/Keras, Caffe, and Torch, can use GPUs to dramatically speed up computations. They all support GPUs out of the box. It was a nasty surprise for me that they all expect the GPU to support CUDA. None of the frameworks can use OpenCL.
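You can see the problem for yourself without installing any framework: the frameworks dynamically load the CUDA runtime libraries, and on an AMD/OpenCL-only box those simply are not there. Here is a minimal sketch (the function name `probe_gpu_runtimes` is my own, not from any framework) that uses only the Python standard library to check which compute runtimes the dynamic loader can actually find:

```python
# Minimal sketch: probe which GPU compute runtime libraries the dynamic
# loader can see. Frameworks like TensorFlow link against the CUDA runtime
# (libcudart), so on a system that only ships libOpenCL they fail to find
# a usable GPU at all.
from ctypes.util import find_library

def probe_gpu_runtimes():
    """Return the resolved path (or None) for each compute runtime library."""
    return {
        "cuda": find_library("cuda"),      # NVIDIA driver API (libcuda.so / nvcuda.dll)
        "cudart": find_library("cudart"),  # CUDA runtime most frameworks load
        "opencl": find_library("OpenCL"),  # vendor-neutral OpenCL ICD loader
    }

if __name__ == "__main__":
    for name, path in probe_gpu_runtimes().items():
        print(f"{name}: {path or 'not found'}")
```

On my Vega machine only the `opencl` entry resolved, which is exactly why every CUDA-only framework shrugged and fell back to the CPU.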

AMD is trying to make these machine learning frameworks utilize AMD hardware, but the support is rather poor, see the current state of it over at Deep Learning on ROCm. AMD hardware and OpenCL are currently useless for machine learning, unless one is willing to build their own machine learning framework from scratch.

The fact that I cannot use this record-breaking theoretical compute capability for machine learning is AMD's failure, no one else's. Even more so because AMD was making a lot of noise about its machine learning capabilities and its Instinct series targeted specifically at machine learning. I am pretty sure now that AMD's "new era of Deep Learning" translates to "none of the existing deep learning software works with AMD". This is the point I am most unhappy about. What is the use of "World's Fastest Training Accelerator for Machine Intelligence and Deep Learning" if little software can utilise it? IMO, AMD should be throwing money at popular projects to make them support AMD as a 1st-class citizen.

Just rage-swapped my RX Vega 64 LC for an NVidia 1080 Ti and thought I had bricked my PC because I could not hear it. It turned out that the air-cooled 1080 Ti at 2000 MHz is much quieter than the liquid-cooled RX Vega 64 LC at 1670 MHz.

And the most amazing thing is that all GPU-accelerated software now just works.

In Primitive Discarding in Vega: Mike Mantor Interview, at 8:30 Mike says that Vega exceeds 1700 MHz and that they run it at "at least 1700 MHz"; however, I was not able to observe that in games. 1700 MHz is about the max my liquid-cooled Vega can do when under-volted; with stock voltages it hovers around 1650 MHz. The VRAM goes from 945 MHz to 1050 MHz easily on my sample, though, with no other adjustments.

Just a bit unhappy that AMD marketing got me with their hot air. Stop saying that AMD marketing isn't good or effective. #makesomenoise