
abdurrahmanalhakim
Journeyman III

How good is AMD's ROCM on Windows?

I'm planning to upgrade my GPU to learn more about ML/DL. I was planning to buy an NVIDIA RTX 3080, but it only has 10GB of VRAM, which according to the videos and articles I've seen isn't really enough for ML/DL. AMD, on the other hand, has the RX 6800 XT with 16GB of VRAM, and they're currently developing ROCm for Windows. Is it safe to use the RX 6800 XT with ROCm on Windows, or should I just stick with the RTX 3080? How does the current performance of AMD ROCm on Windows compare to NVIDIA?

1 Solution
banbalin
Journeyman III


ROCm on Windows is still in its early stages. Just a few days ago, a small portion of it was released, but it clearly needs a lot of work and development, and it will most likely take the company more than a year to turn it into something decent and usable.

If we compare it to CUDA, there are numerous issues even on Linux. AMD's graphics cards are lacking in the AI field and leave much to be desired. If VRAM is your only reason for switching cards, let me tell you that in AI workloads like Stable Diffusion, an NVIDIA card with half the VRAM of an AMD one performs much better and manages its workload better than the AMD card with double the memory.
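
To be fair, the code itself is usually portable: ROCm builds of PyTorch expose the GPU through the same torch.cuda API as CUDA builds, so the gap is in throughput and stability rather than in the code you write. A minimal sketch (assuming a PyTorch wheel for either backend) to check which one you actually have:

```python
import torch

# Both CUDA and ROCm builds of PyTorch expose the GPU through torch.cuda;
# ROCm builds simply map these calls to HIP and set torch.version.hip.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name} via {backend}")
    print(f"VRAM: {props.total_memory / 2**30:.1f} GiB")
else:
    print("No supported GPU detected; running on CPU.")
```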

Even on Linux, where ROCm has been in development for many years, there are many stability issues that make solving certain problems with specific technologies very challenging.

I have an RX 6750 XT with 12GB of VRAM, and I've run into too many issues with Stable Diffusion. Not to mention Windows, where practically nothing can be done, and the little that can be done is absurdly slow: it's like 1% of what you could achieve on Linux. Even now that they've released ROCm for Windows, many technologies still aren't supported; take a look at the documentation.
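
For what it's worth, the usual stopgap on Windows right now isn't ROCm at all but Microsoft's DirectML backend, which does run on AMD cards but is far slower than native CUDA or ROCm. A rough sketch, assuming the torch-directml package is installed (pip install torch-directml); note that the GPU shows up as its own device object rather than through torch.cuda:

```python
import torch
import torch_directml  # Microsoft's DirectML backend for PyTorch on Windows

# DirectML exposes the AMD GPU as a separate device object, so scripts
# written against torch.cuda need to be pointed at it explicitly.
dml = torch_directml.device()
x = torch.randn(1024, 1024, device=dml)
y = x @ x  # the matmul runs on the DirectML device
print(y.device)
```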

It depends on your work environment, but if I were you, I'd get a 4070 Ti or a 4070. Just the fact that they represent a generational leap in hardware for AI purposes makes them infinitely superior to our cards.

In my specific case, my card performs on par with a 3050 Ti in these workloads: quite unfortunate, isn't it? After my experience with various software, I feel that AMD might be fine when you have very little money and only care about gaming (without ray tracing).


4 Replies

Thank you, your answer cleared up my dilemma. While I can't argue with NVIDIA's supremacy in ML/DL, I really hope AMD's ROCm gets more and more tuned in the future. I've been using AMD since the R7 240 for gaming when I was a kid, and I hope someday AMD can go toe-to-toe with NVIDIA in ML/DL.

I feel the same way. I got baited by the VRAM amount, but when I actually got into the field I found so many issues that I ended up returning the card. For gaming alone the card performed very nicely, and the XFX build quality was great; I loved the look of that card.

I was also unable to pass the GPU through on KVM, something I never had a problem with on NVIDIA. Just a heads-up for others: I think it's better to know what you're getting into instead of buying and returning. I really want AMD to be strong and competitive, but there's still a long road ahead in the AI field.
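
For anyone hitting the same passthrough problem: the usual prerequisite is that the GPU is bound to the vfio-pci driver instead of amdgpu before the VM starts. A small diagnostic sketch for a Linux host (0x1002 is AMD's PCI vendor ID) that reports which driver currently owns each AMD device:

```python
from pathlib import Path

# KVM/VFIO passthrough requires the GPU (and its HDMI audio function) to be
# bound to vfio-pci rather than amdgpu; this walks sysfs and prints the
# driver currently claiming every AMD (vendor 0x1002) PCI device.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if (dev / "vendor").read_text().strip() != "0x1002":
        continue
    driver = dev / "driver"
    bound = driver.resolve().name if driver.exists() else "none"
    print(f"{dev.name}: driver={bound}")
```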

Most people don't know this, but AMD is a major player in AI; they just don't put all their eggs in one basket the way NVIDIA does. Anyway, if you're looking for top-notch AI cards, don't go to NVIDIA, go to AMD's Instinct line: that's where the top-of-the-line, military-grade AI hardware is, along with other state-of-the-art AI gear that will blow your mind. I personally picked up a cheap Instinct card on eBay, and the amount of AI work I can do with it is amazing and beats out any NVIDIA card.

Adam J Martin