The Python script does not initialize both GPUs; it only compiles on one (avg time is 1:36). That needs to get fixed first. The CPU render doesn't even run, and I can't get a trace of the background (debug) info.
Even on one card it's plenty of good info, as it scales up perfectly: 2 cards = twice as fast, simple as that.
Yes, I just tried at work and it doesn't work on 2 cards; it must be something recent, as it worked at home a couple of weeks ago...
Many thanks! It helps me gauge where these cards sit in comparison with Nvidia, as they're quite decent for the price. Yours seems to closely match the GTX 970 at work, so it's been an amazing help!
Curious to see how an RX 580 performs; it's £270 on pre-order in the UK and should be a nice investment.
My initial response to you was that I was going to submit my results; right after I posted that, I saw the errors and started sifting through the code. It can be fixed, but it will take some time. I'll get it working eventually, but I can't work on it today.
Don't worry about it; I've actually seen for myself what the problem seems to be, and I'll hopefully get to fix it this evening. Besides, it's supposed to be fixed by the authors, not the users.
For a quick, crude fix (or at least to get the benchmark to run), replace line 464:
BMRender = BMRender + " (x" + len(BMRender) + ")"
with:
BMRender = 2
(where "2" is your number of cards).
Also comment out the next 2 lines:
# BMRender = BMRender.replace(" (Display)", "")
Overall it's sloppy code, with variables interchanged (int to str and vice versa). If you really want to submit data, you'll have to clean up his code first.
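For context, here's a minimal sketch of the kind of type mix-up involved; the variable name echoes the script, but the list and values are hypothetical stand-ins, not the benchmark's actual data:

```python
# Hypothetical reconstruction of the failing pattern: Python refuses to
# concatenate a str with an int, so len(...) must be wrapped in str().
devices = ["Vega (Display)", "Vega"]  # stand-in for the detected GPU list

label = "GPU"
try:
    label = label + " (x" + len(devices) + ")"  # raises TypeError: str + int
except TypeError:
    label = label + " (x" + str(len(devices)) + ")"  # explicit conversion works

print(label)  # GPU (x2)
```

Casting with str() (or using string formatting) is the proper fix here, rather than hard-coding the card count.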
Couldn't help it either I see
Managed to get it rendering as well, thanks for the tips!
Just wanted to see the times, not submit.
As for your full AMD rig - are you looking into a workstation for 3D rendering/modelling? Curious to see what your requirements and evaluation points are.
I jumped on the pre-order of a couple of Vega FEs until our systems provider (Dell) releases any news on options etc., as I have to write up a case for our 2018/2019 acquisition plans. We were formerly "stuck" with Nvidia/CUDA for most of the render farm, but over the years I've successfully pushed for a heterogeneous environment, with the goal of transitioning to a more open-standards development base. Now we're at a tipping point to go all in, as we're just waiting on my evaluation reports for the Vega Pros and, to a lesser extent, the consumer cards.
For the farm, the first test servers will ideally run on EPYC with 2 to 4 Vega Pros, and the workstations most likely on Ryzen 7s up to the lower-end EPYC CPUs, each with a single low- to mid-range Vega Pro. Threadripper is now on the radar too, but it all depends on the workstation-class motherboard and ECC support options we can get our hands on.
"Decisions decisions" sums it up nicely I'd say.
My requirements for this initial rig are a mixed bag, unfortunately, mostly because I still get a lot of work through Max/V-Ray, as it's still the mainstream workflow at my workplace and I sometimes get a lot of work done at home, so this needs a beefy CPU. However, I've been getting an increasing amount of work outside the office, which I do exclusively in Blender, so: beefy GPU.
Having said that, I am mostly looking at GPU price/performance ratio for now, because I only have about £3000 of budget and I'm looking to get my own business running. I would love to go for TR, mainly for the PCIe lanes and quad-GPU support, but it might be a pricey option to start with: X399 mobos might be £700+, the CPU the same, which leaves me with less starting GPU power but lots of headroom for scaling up.
So to sum it up, I'm looking for a server to use as a workstation, but only for the horsepower and scalability. It won't be an efficient day-to-day rig, as it will most likely need a lot of cooling and be power-hungry, but at least I can get a workstation afterwards and keep this as a server.
I'm sure you've had similar situations in the past, since you work with render farms. I don't know; it depends on what's available at the end of September, which is when I'll be looking to purchase.
Do you work in the 3D industry as well? Anything I might have seen?
£3000 would definitely give you a head start on a powerful base system, even if you started with one GPU, good memory and a Threadripper. Even if you purchased at the low end of the TR line, it should keep its value for upgrading later on. That's one of the attractive parts of the X399 platform (PCIe lanes) and the ability to expand as necessary.
I'm not necessarily in the 3D industry, but I do farm out segments of our hardware to clients. The main reason for the farm is data-intensive modelling/projection for R&D in the medical/pharma and, soon, the automotive industry.
Yes, £3000 will be good for starters, and I like the appeal of this new generation of platforms (AMD and Intel alike). So yes, I will probably end up going for a high-end motherboard so I can expand later on where possible; I think future-proofing is the most important factor to consider right now. The Vegas are a bit pricey for what they offer; reaaaaaally looking forward to benching one and seeing where it stands. That's if the miners don't deplete the stocks on this one as well... I'm still trying to get my hands on an RX 580 somewhere, as the price is good, but there's ZERO stock anywhere (reminds me of OnePlus phones and their stock shortages...), I even looked in Croatia and Hungary =)) And with Vega bordering on the £1000 price tag, it seems a bit out of reach even compared to a 1080 Ti, for instance, while the gains seem less worth the price.
Anyway, we'll see. I'll post something as soon as a decision is in range, but until TR and the motherboards come out it's the waiting game for me...
P.S. Interesting stuff you're doing; I'd be crazy curious to see what it's all about! Hopefully I'll get something going myself soon... I've been in the 3D industry for 10 years now, time to kick things up a notch!
I just saw this post.
I have been testing and using the Blender 2.79 Release Candidate 2, and multi-GPU runs great for me without any hacking of scripts or code.
I was previously running version 2.78c, and could only get one of my GPUs to run on that version of the code.
I am seeing significant performance improvements over and above just the ability to run dual GPUs instead of a single one.
I understand this is not a production version of Blender but you might want to download it and take a look.
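If it helps anyone verify which devices Cycles will actually use, device selection can also be inspected and forced from Blender's Python console. A minimal sketch for the 2.7x API as I understand it (run inside Blender, with the Cycles add-on enabled; not an official recipe):

```python
import bpy

# Cycles add-on preferences hold the compute device settings (2.7x API)
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'  # or 'CUDA' for Nvidia cards
prefs.get_devices()  # populate the detected-device list

# Enable every detected device (i.e. both GPUs)
for device in prefs.devices:
    device.use = True
    print(device.name, device.use)

# Make the scene actually render on the GPU
bpy.context.scene.cycles.device = 'GPU'
```

If only one device shows up in that list, the problem is detection rather than the benchmark script.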
You can get information on the Blender 2.79 Release Candidate 2 code here:
Dev:Ref/Release Notes/2.79 - BlenderWiki
This Blender 2.79 release contains many updates and improvements for AMD cards.
In this release:
- Cycles: Built-in Denoising, Shadow catcher, Principled shader, AMD OpenCL optimizations.
- Faster AMD OpenCL rendering and feature parity with NVidia CUDA
I hit a problem running the "Production Benchmark" in Blender 2.79 RC2: it hangs on me if I try to run GPU Compute on either a single or dual GPUs.
Blender 2.78c will complete the render on a single GPU on the same system.
I am investigating and have initially asked for help and advice here; in case you hit similar issues, you might find the thread useful if I get some answers:
workflow - Is Blender 2.79_RC2 supposed to be able to render "Production Benchmark" in Multi GPU? - Blender Stack Exchan…