I mentioned this a while ago and was told it was because adaptive sampling wasn't being used in multi-GPU mode in that version, but would be in the next release. I can say it is still the same. I thought maybe it was just my hardware, but then I tried it with LuxCore Render, and there the results are what I would expect. LuxCore doesn't display the finished render time, but it does show samples per second, and it scales as I'd expect: CPU only = 2.9 M samples/s, CPU + 1 GPU = 4.9 M samples/s, and CPU + 2 GPUs = 8.3 M samples/s (more samples per second means a faster render). It is also consistent in that the CPU plus my slower GPU is a bit slower than the CPU plus the faster GPU. Basically, the more devices, the faster the render. But with ProRender, it's like this:
500 sample test
RX5700 XT = 30.32 sec.
RX5700 XT+RX580 = 27.92 sec.
RX5700 XT+RX580+CPU = 31.79 sec.
So adding the RX580 to the main GPU only improves speed by a very small margin, and on longer renders that gap gets even smaller. Then adding the CPU to both of them actually hurts the speed, and that difference gets worse on longer renders.
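To put numbers on the contrast, here is a quick back-of-envelope script (Python, purely for illustration) using the figures quoted in this thread. LuxCore's samples-per-second grow substantially with each added device, while ProRender's 500-sample timings barely improve, or even regress, relative to the single RX5700 XT:

```python
# Figures quoted above: LuxCore throughput (Msamples/s) and
# ProRender times for the 500-sample test (seconds).
luxcore = {"CPU": 2.9, "CPU + 1 GPU": 4.9, "CPU + 2 GPUs": 8.3}
prorender = {
    "RX5700 XT": 30.32,
    "RX5700 XT + RX580": 27.92,
    "RX5700 XT + RX580 + CPU": 31.79,
}

# LuxCore: relative throughput vs. CPU alone (>1.0 means faster).
for config, msps in luxcore.items():
    print(f"LuxCore   {config}: {msps / luxcore['CPU']:.2f}x vs. CPU only")

# ProRender: relative speed vs. the single RX5700 XT
# (baseline_time / config_time, so >1.0 means faster).
baseline = prorender["RX5700 XT"]
for config, secs in prorender.items():
    print(f"ProRender {config}: {baseline / secs:.2f}x vs. single GPU")
```

By these numbers, LuxCore with all three devices reaches roughly 2.86x the CPU-only throughput, while in ProRender the RX580 adds only about a 1.09x speedup and the CPU drags the three-device combination down to about 0.95x of the single-GPU time.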
I don't know why this happens, but I have observed that when LuxCore is doing a final render on all three devices, it uses all threads of the CPU at 100%, and BOTH GPUs are at almost 100% use, and it stays that way until it is finished. The 5700 draws 160-210 watts and the 580 draws 110-180 watts throughout the entire rendering process.
Now with ProRender, with all 3 devices enabled, the CPU only runs at about 30%, and each GPU does go to almost 100%, but only alternating: each GPU works for about 3-4 seconds, and they go back and forth, but never together. The CPU seems to behave similarly, though not as pronounced, whereas each GPU drops to 0% when the other one goes up to 100%.
I have had other people here say that, at least with 2 matching Nvidia cards, they saw about a 50% cut in render time with both cards over one GPU. I don't know, but it would be nice to get the benefit of having my second GPU, because tossing the load back and forth doesn't really seem to help anything. It sure would be great to get a bit more out of having both GPUs in my system.
Hi, I have only been involved in this for a couple of months. I found that with the ProRender 2.2 patch, dual GPUs improved rendering the most at image sizes of 896x896 px, but this is with dual RX580s. This is the last update, not the one expected around 24/12/2019. A difference may be that you're using the RX5700 XT, which I don't have, but I've heard its RT cores may be faster and may behave differently when paired with the RX580. Also, CPU + GPU only rendered slightly faster on my 6-core CPU, and the colouring of the rendered squares is vastly different between the CPU and GPU. Reading the posts, AMD is hoping to do a patch soon to improve dual GPU and a few other things, if this didn't help.
Thanks for the response. In all circumstances the 5700 renders much quicker alone than the 580 does on its own, and even in Cycles using both GPUs together isn't always faster, so I don't expect too much. The main reason I felt the need to ask this question was that LuxCore Render is always able to push ALL THREE to 100% use simultaneously, instead of splitting a lighter load or trading off between devices. Unfortunately, since LuxCore uses its own unique node system, it's kind of a hassle to compare side by side whether it's actually doing a better job or just maxing things out without doing it efficiently. But for free products, I say they are all awesome, and ProRender is doing pretty well; it only keeps getting better!
No worries. If it isn't a driver or coding efficiency issue, the only things I can add are to use the updated Radeon GPU software, and that I started using chilled-air cooling again for long, intensive multi-GPU renders. I managed to drop the temps from 75+ degrees back to 55-65 degrees under load, and found the GPUs did not throttle back as much to preserve their life. Maybe with the Radeon render code for Blender, your RX580 is peaking temperature-wise more often while trying to keep up with the 5700 and then throttling itself, compounding the issue while keeping the GPU from frying itself.
This issue is most certainly not related to thermals or throttling. I have zero concern for fan noise in my machine, therefore I have a VERY steep fan curve on both my GPUs, not to mention 7 additional chassis fans. Even when LuxCore Render is pushing both GPUs and the CPU at 100% load without interruption, neither GPU breaks 65-70 degrees. So when ProRender is using both GPUs, neither one ever gets past 55-60 degrees either.
This isn't all that big of an issue for me, but just something I hope to be improved in the future maybe.
Thanks. This is actually something we're working on improving in future versions of the RPR SDK. I talked about this a bit here:
It's not something that will change in a week but we're working on it!