Deep learning and artificial intelligence have been huge topics of interest in 2016, but so far most of the excitement has focused on either Nvidia GPUs or custom silicon like Google's Tensor Processing Unit (TPU), the chip Google built to accelerate TensorFlow workloads. We know Intel is working on upcoming Xeon Phi-class silicon to throw at these problems, and AMD wants to enter the market too, courtesy of a new lineup of graphics cards drawn from three different product families. AMD will also offer its own software tools and customized libraries to accelerate these workloads. It's still fairly early days for the AI and deep learning markets, and AMD could definitely use the cash — but what's it going to bring to the table?
First up, let’s talk about the accelerators themselves. AMD is deploying three new cards under its new Radeon Instinct brand, from three different product families:
The MI6 is derived from Polaris, albeit clocked slightly below the boost frequencies we saw on consumer parts (total onboard RAM, however, is 16GB). The MI8 is a smaller card built around the R9 Nano and clocked at the same frequencies, with the same 4GB RAM limitation. (It's not clear how heavily AI and deep learning workloads depend on RAM capacity, but AMD presumably wouldn't sell the chip into this market if it didn't have a viable use-case for it.) Finally, the MI25 will be a Vega-derived chip that's expected to be significantly faster than the other two cards, but AMD isn't giving any details or information on that core yet. AMD hasn't specified a ship date for any of these products beyond H1 2017, but we'd expect the company to bring its MI6 and MI8 cards out first, to test the waters and establish a foothold in the market.
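To put the 4GB-versus-16GB question in perspective, here's a back-of-the-envelope sketch of how much memory a network's parameters alone consume. The model size below is a hypothetical example, not a figure from AMD or any specific framework:

```python
# Rough estimate of parameter memory footprint on a GPU.
# 150M parameters is a hypothetical model size chosen for illustration.

def param_memory_gb(num_params, bytes_per_param=4):
    """GB needed to hold the parameters alone (FP32 = 4 bytes each)."""
    return num_params * bytes_per_param / 1024**3

params = 150_000_000
fp32 = param_memory_gb(params)       # FP32 weights
fp16 = param_memory_gb(params, 2)    # FP16 halves the footprint

print(f"FP32: {fp32:.2f} GB, FP16: {fp16:.2f} GB")
```

Parameters are only part of the story — activations, gradients, and optimizer state during training can multiply that footprint several times over, which is where a 4GB card like the MI8 gets tight far sooner than the MI6's 16GB.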