Your best bet at this moment would be Alphacool: they run a programme where, if you have a GPU with no waterblock available for it, they'll manufacture one, and I believe you get one for free (please check for yourself just to make sure). It's not exactly a custom product, more like helping them expand a growing collection.
Might be a better incentive if you're up for a purchase.
May I ask, is there a specific reason for going with Pro Duo?
I mostly wanted to go with the Pro Duo because of the memory (32GB!!!). I have some tasks in mind that would greatly benefit from a large amount of VRAM.
So with a Threadripper and two Pro Duos, I'd get 64GB of graphics memory in total.
I already have the Threadripper 1950X and an ASRock X399 Professional Gaming board, and was looking for GPUs to complete the set.
My only other choice is the Vega Frontier Edition, but two of those only get me to 32GB of VRAM.
The Pro Duo was a perfect match for price and memory.
The FirePro W9100 would have worked too, but the price is a bit too high for me.
I assume, since you're going for such a high-end setup, that you know what you're doing. I will point out, just for safety's sake, that GPU memory is not cumulative in most applications, unless you have something specific that is built and tailored for such independent use. For instance, in OpenCL setups that involve 3D rendering, usable memory is always limited by the smallest card, so two 16GB cards will not add up to 32GB. The second point is that you seem to be forgetting about HBCC.
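To illustrate that point: in a typical mirrored multi-GPU renderer, every card holds a full copy of the scene, so usable VRAM is the minimum across devices, not the sum. A minimal sketch (the function name is just for illustration):

```python
def usable_vram_gb(device_vram_gb):
    """Usable scene memory when data is mirrored to every GPU.

    In mirrored multi-GPU rendering (e.g. typical OpenCL renderers),
    each card needs its own full copy of the scene, so capacity is
    bounded by the smallest card rather than summed across cards.
    """
    if not device_vram_gb:
        raise ValueError("at least one device required")
    return min(device_vram_gb)

# Two 16GB cards do not give 32GB of usable scene memory:
print(usable_vram_gb([16, 16]))  # 16
```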
As a Vega FE owner (second one on the way), if you want anything tested I'd be glad to help with that. HBCC increases your available memory. For instance, because I only have 16GB of DDR4 for now, HBCC can only use 8GB of it as additional memory, which means my Vega FE's effective memory jumps from 16GB to 24GB. This is the beauty of HBCC: you can assign RAM directly to the GPU, which is much, much faster than HDD/SSD/NVMe paging.
Radeon SSG is built exactly on that principle - 2TB strapped directly to HBCC on the card. At the base, it's still 16GB HBM2.
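The arithmetic above can be sketched as a small helper. The 50% assignable fraction is an assumption that matches the numbers in this thread (8GB out of a 16GB DDR4 pool); the actual HBCC memory segment size is configurable in the Radeon driver settings:

```python
def hbcc_effective_gb(hbm_gb, system_ram_gb, assignable_fraction=0.5):
    """Effective GPU memory when HBCC assigns part of system RAM.

    assignable_fraction=0.5 mirrors the example in the thread
    (8GB of 16GB DDR4); the real segment size is set in the
    Radeon driver, so treat this default as an assumption.
    """
    return hbm_gb + system_ram_gb * assignable_fraction

# Vega FE (16GB HBM2) with 16GB DDR4, half assignable via HBCC:
print(hbcc_effective_gb(16, 16))  # 24.0
```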
Having said that, if you have anything you're able to share that you know won't fit in 16GB, I'd be happy to test it for you. For me it's quite simple: go into Blender, create the particle setup from hell, and watch it use 12GB without a hitch while doing nothing else.
Seems like HBCC might help me achieve what I want using the Vega FE. Does HBCC use SVM underneath to provision the extra memory from RAM? I'm testing a very specific setup that would benefit a lot from avoiding the PCIe bottleneck. I have a custom algorithm that reduces inter-GPU communication precisely to avoid that bottleneck, but it's very specific to my use case; the intended use is custom, not mainstream.
I would be pretty surprised if you managed to saturate PCIe bandwidth; however, it sounds like you already have.
I would love to be able to answer that specific question, but I'm fairly high-level as far as deep learning goes; that sounds more like a question for the DevGurus section, and I'm sure someone there will be able to answer it. All I know is that HBCC basically turns the on-board HBM2 into a last-level cache, if that's of any relevance. There's also a fair bit of information on HBCC in the Vega whitepaper; maybe that will answer your question?
I'm really sorry, I'm more oriented toward visual and creative uses of my compute power, although I'm deeply fascinated by recent developments in deep learning and have dabbled a bit in Python and C++. Still, nowhere near your skill level!
Let me know if there's anything else to help with.