
GlenWalker
Journeyman III

Dual W7700 or single W6800

Hello Everyone! 👋

 

I’m aiming to build a workstation soon for a research project using LLMs, and due to various constraints I am fitting it into a 3U server rack. That has led me to conclude I need a card or cards from the Radeon Pro lineup, but I keep going around in circles on the Internet trying to choose the best one.

 

If cost weren’t an issue I would go with the W7800, but that is about £2500 currently. A single W6800 is currently about £1500, or I could get a W7700 for £1000 and potentially upgrade to two W7700s in the future.

 

So the question really is: will the W6800 be better for LLMs on ROCm even though it is from the previous generation (I’m thinking its 32 GB of VRAM will be the advantage, but the RDNA 2 core is slower and lacks AI-specific accelerators, right?), or should I try to get the latest tech and go for one or two W7700s?

 

Appreciate any thoughts/comments you all have!


4 Replies

The AMD Moderator for Professional GPU cards can assist you with your problem: @fsadough

fsadough
Moderator

The W6800 does not have any AI accelerators, so for LLaMA you are better off with the W7700.

GlenWalker
Journeyman III

Thanks fsadough!

 

Does ROCm scale/span effectively across multiple GPUs?

 

I ask because I could save on the rest of the system if I focus on a single GPU (i.e., get a W7700 now and look to replace it with a W7800 in the future). Or should I go with the more expensive motherboard/CPU combo that supports multiple GPUs (i.e., with x8/x8 or x16/x16 slots) and focus on getting a second card?

 


Yes, ROCm is capable of scaling across multiple GPUs. Make sure you acquire a motherboard with multi-GPU x16/x16 support; with x8/x8 you might be limited in bandwidth.
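
For example, here is a minimal sketch of checking that both cards are visible before committing to a multi-GPU setup, assuming a ROCm build of PyTorch (installed from the pytorch.org ROCm wheels); the device names and memory sizes in the comments are illustrative:

```python
# Minimal sketch, assuming a ROCm build of PyTorch. On ROCm, PyTorch
# exposes AMD GPUs through the same "cuda" device namespace, so the
# standard enumeration calls work unchanged.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No ROCm-visible GPU found")

count = torch.cuda.device_count()
print(f"{count} GPU(s) visible")  # expect 2 on a dual-W7700 box

for i in range(count):
    props = torch.cuda.get_device_properties(i)
    # e.g. "AMD Radeon PRO W7700" with roughly 16 GiB each (illustrative)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
```

For LLM work specifically, higher-level libraries typically handle the splitting for you: Hugging Face Accelerate can shard a model across both cards via `device_map="auto"`, and vLLM offers tensor parallelism, so a model too large for a single W7700's 16 GB can still be served across the pair.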