novicefedora

What will be AMD's answer to NVIDIA's dominance in AI?

Discussion created by novicefedora on May 4, 2020

I intend this to be an extension of my question on Stack Exchange, where I asked why Radeons aren't widely used in AI: hardware evaluation - What is the reason AMD Radeon is not widely used for machine learning and deep learning? - Artific…

There is only one answer to that question, and its author puts the blame entirely on AMD's lack of APIs and software. NVIDIA opened up general-purpose computing on its GPUs through CUDA long ago, while AMD (then ATi) didn't. That led researchers to prefer NVIDIA and write more libraries for it, so AI research became easy on NVIDIA hardware. Now they are so invested in NVIDIA that they don't want to spend time writing new libraries for Radeon. It seems Radeon will lose out in AI if this situation doesn't change.

What are AMD's plans for AI? I know AMD is not as large as NVIDIA or Intel, so its resources have to be split between CPUs and GPUs, and it may not have the time, money, and energy to mobilize its engineers to write APIs, software, or something similar to CUDA that would allow AI libraries to be created for, or ported to, its Radeon platform.

AI seems to be the next lucrative market, and AMD has competitive technology to take on NVIDIA but seems unable to make headway there. Looking at discussions on Quora and elsewhere, a lot of people hope AMD will provide an alternative to NVIDIA in AI; some say an ASIC or FPGA solution would be better than a GPU one. Being a layperson, I don't understand whether it is better for AMD to create new hardware for AI (like an ASIC) or to write the APIs and software needed to port existing libraries.

My math is not good enough to understand much of how neural networks work. What actually prompted me to ask this question here is that I wanted to try AI as a way to familiarize myself with neural networks: what they are, how they work, setting up the environment, and so on. Even for this, most of them seem to require CUDA, so I can't try many interesting neural network experiments with a Radeon.
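For what it's worth, the basics of neural networks don't actually need CUDA or any GPU at all. Here is a minimal sketch (an illustrative example, not from the original discussion) of a tiny two-layer network learning XOR with plain NumPy on the CPU, which works the same on any machine regardless of graphics vendor:

```python
# A tiny 2-layer neural network learning XOR, using only NumPy.
# Runs on any CPU -- no CUDA, ROCm, or GPU required.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 2-8-1 network
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # mean squared error loss
    loss = np.mean((out - y) ** 2)
    # backward pass (hand-derived gradients for sigmoid + MSE)
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)
    # gradient descent update
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print("final loss:", round(float(loss), 4))
print("predictions:", np.round(out.ravel(), 2))  # targets are 0, 1, 1, 0
```

GPUs (and hence CUDA vs. ROCm) only matter once the networks get large enough that CPU training becomes too slow; for learning the concepts, something like the above is enough.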

Is ROCm going to make it easy?
