While gaming GPUs get most of the attention, workstation graphics cards are essential for many kinds of engineering, scientific, AI, and multimedia tasks. So I was eager to get a chance to test out a pre-release version of AMD’s new Radeon Pro W5700 workstation GPU ($799).
First, it was a relief to get a new GPU that sticks to a power envelope similar to prior generations. In my case, it was simple to swap in for an Nvidia GTX 1080: it doesn’t require more power and uses similar PCIe power plugs (8 + 6 pins versus the 8 + 8 pins on my EVGA 1080 FTW). To fit connectors for six monitors, the back of the card has five mini-DisplayPort 1.4 ports and one USB-C connector. Fortunately, it also comes with a couple of DisplayPort dongles and a DVI dongle. The mini-DisplayPort ports support DSC, so they can drive displays at up to 8K @ 60fps. The USB-C port can also supply up to 15 watts of power.
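To see why DSC matters for that 8K claim: DisplayPort 1.4’s HBR3 link carries roughly 25.92 Gbps of payload after line coding, well short of an uncompressed 8K60 stream. A rough back-of-envelope check (active pixels only, ignoring blanking, so real requirements are a bit higher):

```python
# Back-of-envelope: why 8K @ 60fps needs Display Stream Compression
# over DisplayPort 1.4. Counts active pixels only; real links also
# carry blanking intervals, so actual requirements are somewhat higher.

DP14_PAYLOAD_GBPS = 25.92  # HBR3, 4 lanes, after 8b/10b line coding

def uncompressed_gbps(width, height, fps, bits_per_pixel):
    """Raw video bandwidth in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

eight_k = uncompressed_gbps(7680, 4320, 60, 24)  # 8-bit RGB
print(f"8K60 uncompressed: {eight_k:.1f} Gbps vs {DP14_PAYLOAD_GBPS} Gbps link")
print(f"Minimum compression ratio: {eight_k / DP14_PAYLOAD_GBPS:.2f}:1")
```

Even at 8 bits per channel, the stream needs nearly 2:1 compression to fit; DSC’s typical 3:1 visually lossless ratio covers it comfortably, with headroom for 10-bit color and blanking.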
Inside, the “Navi” GPU is built on AMD’s new RDNA architecture, using a state-of-the-art 7nm process to cram 10.3 billion transistors onto a 251mm² die. The GPU has 36 compute units, 2304 stream processors, and a base clock of 1183 MHz, along with 8GB of GDDR6 RAM. As for performance, AMD claims up to 0.56 TFLOPS at FP64, 8.89 at FP32, and 17.8 at FP16. Those specs put it behind the WX 8200, except for an improved fill rate and reduced power consumption (205 watts maximum versus 230 watts). However, the W5700 benefits from the RDNA architecture, which the company says executes 25 percent more instructions per clock cycle. The W5700 is also ready for PCIe 4, with a theoretical bandwidth of up to 24.6 GBps available.
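AMD’s throughput figures are internally consistent: FP16 is twice FP32, FP64 is 1/16th of FP32 (a common ratio for consumer-derived silicon), and since each stream processor can issue one fused multiply-add (2 FLOPs) per clock, the quoted FP32 peak implies a boost clock near 1.93 GHz. A quick sanity check:

```python
# Sanity-check AMD's quoted peak-throughput figures for the W5700.
STREAM_PROCESSORS = 2304
FP32_TFLOPS = 8.89  # AMD's quoted FP32 peak

# Peak FP32 = 2 FLOPs (one FMA) * stream processors * clock,
# so we can solve for the boost clock the 8.89 TFLOPS figure implies.
implied_boost_ghz = FP32_TFLOPS * 1e12 / (2 * STREAM_PROCESSORS) / 1e9
print(f"Implied boost clock: {implied_boost_ghz:.2f} GHz")  # ~1.93 GHz

# The FP16 and FP64 figures follow the usual ratios:
print(f"FP16 (2x FP32):   {2 * FP32_TFLOPS:.1f} TFLOPS")   # 17.8
print(f"FP64 (1/16 FP32): {FP32_TFLOPS / 16:.2f} TFLOPS")  # 0.56
```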
Hardware encoding and decoding support is integrated through the Radeon Media Engine and includes encoding for H.264 and H.265, as well as decoding for those codecs and VP9.
I’m usually pretty skeptical of the utilities bundled with GPUs, but AMD includes some interesting ones with the W5700. First is ReLive, which allows the streaming of VR content to a wireless VR headset. I didn’t have one around to test it out personally, but I can see that it provides a nice alternative to the thicket of wires that connects my Oculus now. There is also Image Boost, a clever way to increase the apparent sharpness of non-4K monitors. Turning it on lets you choose a virtual display resolution (up to 4K) and have the GPU render at that resolution, then rescale the result for your panel. I was testing on a system with one 4K and one 1920 x 1200 monitor, so I used the feature on the smaller monitor. After enabling it, I could more easily read small text on the display. Images did, in fact, appear somewhat sharper.
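AMD hasn’t published Image Boost’s internals, but conceptually this is supersampling: render the frame at the virtual resolution, then filter it back down to the panel’s native size. A minimal stand-in using simple block averaging (the driver presumably uses a higher-quality filter, so treat this purely as an illustration of the idea):

```python
import numpy as np

def supersample_downscale(img, factor=2):
    """Crude stand-in for Image Boost's downscale step: take a frame
    rendered at factor-x the native resolution and average each
    factor-x-factor block down to one native pixel. The real driver
    filter is unpublished; this only illustrates supersampling."""
    h, w = img.shape[:2]
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]  # crop to an even multiple
    blocks = img.reshape(h2, factor, w2, factor, -1)
    return blocks.mean(axis=(1, 3))

# A 4K-rendered frame downscaled for a 1920 x 1080 panel:
frame_4k = np.random.rand(2160, 3840, 3)
native = supersample_downscale(frame_4k, factor=2)
print(native.shape)  # (1080, 1920, 3)
```

Each output pixel blends four rendered samples, which is why fine detail like small text can look cleaner than a natively rendered frame.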
AMD provided us with several sets of relevant benchmark results for the W5700. The first set compares it to two older GPUs that it is designed to replace — Nvidia’s Quadro P4000 and AMD’s own Radeon Pro WX 7100 — and shows solid performance improvements across the board:
If your GPU is more than a year or two old, there are some impressive performance gains from upgrading to a W5700
So, as you might expect, you can get more for your money now than you could two years ago. For those evaluating current model cards, AMD compared the W5700 with the similarly priced current model Nvidia Quadro RTX 4000 GPU. It certainly holds its own, but doesn’t blow it away:
One area where AMD believes the W5700 shines compared with the Quadro RTX 4000 is efficiency when both the CPU and GPU are in use. According to its benchmarks, when a CPU-intensive task like rendering runs alongside GPU floating-point work, the W5700 achieves a much higher frame rate. I look forward to testing that for myself in video processing (once Neat Video adds support for the W5700), where tasks like noise reduction can run on the GPU while encoding runs on the CPU.
One impressive result from W5700 benchmarks is improved efficiency when both GPU and CPU tasks are being run in parallel
AMD has done an excellent job of getting broad design-industry support for its ProRender software and the W5700, including headline applications like 3ds Max, Maya, SolidWorks, Creo, Blender, Cinema 4D, and others. So design tool users should feel right at home with a W5700, and be assured of getting excellent results for a value-priced workstation GPU.
The story is different for those who need GPGPU solutions like machine learning. It’s certainly not a secret that AMD has been playing catch-up in supporting the tools and frameworks that AI developers need. It’s been making progress, but a large number of projects are still built natively on Nvidia’s CUDA, and won’t run on AMD’s GPUs. In some cases, there are OpenCL alternatives, especially for those running Linux. But for Windows users, the news isn’t as good. Google’s popular TensorFlow requires CUDA on Windows, for example. In my case, my neural network code is all built on either TensorFlow or Mathematica — which also requires CUDA — so I wasn’t able to benchmark the W5700’s training performance compared to my existing Nvidia GPUs.
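If you’re weighing a card for ML work, it’s worth confirming up front that your framework actually enumerates it. For TensorFlow, a quick check like the sketch below does the job; on a stock Windows build, where TensorFlow only supports CUDA, an AMD card would not appear in the list:

```python
def visible_gpus():
    """Return the names of GPU devices TensorFlow can actually use.

    Stock TensorFlow builds on Windows only support CUDA, so an AMD
    card like the W5700 won't show up here even though it works fine
    for graphics. Returns an empty list if TensorFlow isn't installed.
    """
    try:
        import tensorflow as tf  # may not be installed at all
    except ImportError:
        return []
    return [d.name for d in tf.config.list_physical_devices("GPU")]

print(visible_gpus())
```

An empty list before you buy is a much cheaper discovery than after.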
When I asked AMD for advice, they explained that their desktop GPUs are intended for graphics rather than machine learning, and that for machine learning I should look to their datacenter offerings. I’m sure that makes sense to them strategically, but it means that if you want your workstation to do double duty, running both traditional design tools and some machine learning projects, an AMD GPU is probably not a good choice.
If your productivity is bottlenecked by rendering performance, and your GPU is more than a year or two old, you’ll see noticeable speedups by upgrading to a W5700. On the other hand, if you already have a current-model workstation GPU, the W5700 isn’t going to blow you away. In either case, if AI is an important part of your workload, be cautious about upgrading unless you can assure yourself that your tools will run on the W5700. If you’re shopping for a workstation GPU, the $799 price tag of the W5700 puts it very close to Nvidia’s Quadro RTX 4000. And at least according to AMD’s benchmarks, it offers some advantages in floating-point and especially in combined CPU-plus-GPU performance.
My RX 480 8GB has the same number of compute units. I have done a lot of testing with video editing at 3840x2160 and my rig seems to be able to handle it. My R5 2400G is a basic 4 core 8 thread unit which is adequate for my needs. I have 24GB of memory now and I am considering options as needed.
AMD image sharpening is interesting, as it can do much to improve downsampled images and video.
It’s possible this card may be better received in the Apple market than on Windows, as it does give additional benefits over the previous Vega and RPD cards.
Also, makes me wonder why Radeon Pro Image Boost hasn't made its way to consumer Radeon cards, since it appears to be a more advanced version of Virtual Super Resolution combined with a form of Radeon Image Sharpening.