Hi guys,
I figured this is probably the best place to ask for help: could anyone who is able and willing help me get a benchmark result for 2x RX480?
I'm looking to see how they behave in Blender rendering, and I would hugely appreciate it if anyone could help me put this question to rest.
Been looking at a full AMD rig for a while now and I need a bit of input in this one last aspect.
Cheers, please drop a line if you can help and I'll guide you through the benchmark: BlenchMark | Hardware performance in Blender 3D
Thanks!
V.
The Python script does not initialize both GPUs and only compiles on one (avg time is 1:36). That needs to be fixed first. The CPU render doesn't even run, and I can't run a trace on background (dbg) info.
Even on one it's plenty good info as it scales up perfectly. 2 cards = twice as fast, simple as that.
Yes, I just tried at work and it doesn't work on 2 cards. Must be something recent, as it worked at home a couple of weeks ago...
Many thanks! It helps me gauge where these cards sit compared with Nvidia, as they're quite decent for the price. Yours seems to closely match the GTX 970 at work, so it's been an amazing help!
Curious to see how an RX580 performs; it's £270 on pre-order in the UK and should be a nice investment.
Cheers!
V.
My initial response to you was that I was going to submit my results; right after I posted that, I saw the errors and started sifting through the code. It can be fixed, but it will take some time. I'll get it working eventually, but I can't work on it today.
Don't worry about it; I've actually seen for myself what the problem seems to be, and I'll hopefully get to fix it this evening. Besides, it's supposed to be fixed by the authors, not the users.
For a quick crude fix (or to at least get the benchmark to run):
replace line 464
BMRender = BMRender[0] + " (x" + len(BMRender) + ")"
with
BMRender = 2
(where "2" is your number of cards).
Also comment out the next 2 lines:
# BMRender = BMRender.replace(" (Display)", "")
# print(BMRender)
Overall it's sloppy code, with variables being interchanged (int to str and vice versa). If you really want to submit data, you'll have to clean up his code.
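For reference, a less destructive fix keeps the original logic and just casts the integer card count to a string before concatenating. This is a sketch that assumes BMRender starts out as a list of detected device-name strings, as the original line 464 implies; the list contents here are made up for illustration:

```python
# Assumed: the script has populated BMRender with one entry per detected GPU.
BMRender = ["Radeon RX 480", "Radeon RX 480"]

# The original line 464 fails with a TypeError, because len() returns an
# int and Python will not concatenate int to str implicitly.
# Wrapping len() in str() preserves both the device name and the card count:
BMRender = BMRender[0] + " (x" + str(len(BMRender)) + ")"
print(BMRender)  # -> Radeon RX 480 (x2)
```

With this version, BMRender stays a string, so the following `BMRender.replace(" (Display)", "")` line would no longer need to be commented out either.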
Couldn't help it either, I see.
Managed to get it rendering as well, thanks for the tips!
Just wanted to see the times, not submit.
As for your full AMD rig - are you looking into a workstation for 3D rendering/modelling? Curious to see what your requirements and evaluation points are.
I jumped on the pre-order of a couple of Vega FEs until our systems provider (Dell) releases any news on options etc., as I have to write up a case for our 2018/2019 acquisition plans. We were formerly "stuck" with Nvidia/Cuda for most of the render farm, but over the years I've successfully pushed for a heterogeneous environment, with the goal of transitioning to a more open-standards development base. Now we're at a tipping point to go all in, as we're just waiting on my evaluation reports for the Vega Pros and, to a lesser extent, the consumer cards.
For the farm, the first test servers will ideally run on EPYC with 2-4 Vega Pros, and the workstations will most likely run Ryzen 7s up to the lower-end EPYC CPUs, each with a single lower- to mid-end Vega Pro. Threadripper is now on the radar too, but it all depends on the workstation-class motherboard + ECC support options we can get our hands on.
Decisions decisions.
"Decisions decisions" sums it up nicely I'd say.
My requirements for this initial rig are a mixed bag, unfortunately, mostly because I still get a lot of work through max/vRay, as it's still the mainstream workflow at my workplace and I sometimes get a lot of that work done at home, so this needs a beefy CPU. However, I have been getting an increasing amount of work outside the office, which I do exclusively in Blender, so: beefy GPU.
Having said that, I am mostly looking at GPU price-performance ratio for now, because I only have about £3000 of budget and I'm looking to get my own business running. I would love to go for TR, mainly for the PCIe lanes and quad-GPU support, but it might be a pricey option to start with: X399 mobos might be £700+, the CPU the same, which leaves me with less starting GPU power but lots of headroom for scaling up.
So to sum it up, I'm looking for a server to use as a workstation, but only for the horsepower and scalability. It won't be an efficient day-to-day rig, as it will most likely need a lot of cooling and be power-hungry, but at least I can get a workstation afterwards and keep this as a server.
I'm sure you've had similar situations in the past, since you work with render farms. I don't know; it depends on what's available at the end of September, which is when I'll be looking to purchase.
Do you work in the 3D industry as well? Anything I might have seen?
£3000 would definitely give you a head start on a powerful base system, even if you started with one GPU, good memory, and a Threadripper. Even if you purchased on the low end of the TR line, it should keep its value for upgrading later on. That's one of the attractive parts of the X399 platform: the PCIe lanes and the ability to expand as necessary.
I'm not necessarily in the 3D industry, but I do farm out segments of our hardware to clients. The main reason for the farm is data-intensive modelling/projection for R&D in the medical/pharma, and soon automotive, industries.