
Lumion 10 rendering performance with Radeon VII

Hi guys,

I am running a CAD workstation with a Threadripper 2970WX, 2x Radeon VII, and 64 GB of 3200 MHz C16 RAM.

I got the Radeon VII for Vectorworks 2020, doing architectural work with large BIM models at my startup firm. It replaced a 1080 Ti that had problems with VRAM overfilling and was generally unstable. The Radeon VII has been a great replacement: I can open lots of files without crashing and import massive non-optimised models, which is sometimes unavoidable in architecture. I bought a second card so that I can run Lumion 10 on a separate monitor and render scenes independently of the card running CAD. This setup works very well: I can now render a scene or a video of the model with one card maxed out while the other is free for other work.

However, after a little more research, I'm a little frustrated with the Radeon VII's performance in Lumion 10 compared to other similarly priced cards, mainly the rendering performance.

Now there are two aspects to performance in Lumion.

1. The build mode FPS.

They recommend running only at 1080p as it's a heavy program; 1440p and 4K are not worth the effort even with a Titan RTX.

I'm getting 32 fps with the 3-star, 100% resolution settings. A 1080 Ti is around 42 fps by comparison, and a 2080 Ti is 60 fps or just under. This isn't great for the Radeon, but it's OK and perfectly usable to work in.

2. Rendering.

This is where the Radeon VII really gets let down. It's being beaten by Nvidia GTX 1080 and RTX 2070 cards, and isn't even close to a 1080 Ti. The 2080 Ti has almost exactly double the rendering performance. The Radeon VII is basically hovering around the performance of a 1070!

I just think there is something seriously wrong here; the card's performance is not being leveraged at all.

By the way, Lumion do a benchmark using one of the example house scenes, as it seems to be the only way to measure performance properly; the built-in benchmark and PassMark scores are irrelevant. The benchmark process is below:

How To Compare:

1. Make sure that the monitor resolution is set to 1920x1080 on the screen that the Lumion window is on.

2. Open the Settings screen in Lumion and use the following settings:

  • Editor Quality: 3 stars
  • Enable high-quality terrain: On
  • Enable high-quality trees: On
  • Editor Resolution: 100%

3. Load the Example Scene - Villa Cabrera. Don't move the camera.

4. In Build Mode, move the mouse cursor to the white square in the top right corner. What is the 'FPS' number?

5. Go to Movie Mode and render Clip 1 to an .MP4 file. Render settings: 1920x1080, 5 star quality, 30 frames per second.

6. After it has rendered about 10 frames, hold down the CTRL key and note how many seconds it takes to render each frame. Alternatively, render the whole clip and note the total and per-frame render times.

7. Repeat render steps 5 and 6 with the resolution at Ultra HD (4K), 3840x2160.

8. A comparative test in Photo Mode of Photo 1 at Poster 7680x4320 would also be great.
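For step 6, the per-frame figure can also be derived from the total clip render time. A quick sketch of the arithmetic (the clip length and total time below are hypothetical example numbers, not from the thread):

```python
def seconds_per_frame(total_render_seconds, clip_seconds, fps=30):
    """Average per-frame render time derived from a clip's total render time."""
    frames = clip_seconds * fps
    return total_render_seconds / frames

# Example (hypothetical numbers): a 10 s clip at 30 fps is 300 frames;
# if it took 2250 s in total, that is 7.5 s per frame.
print(seconds_per_frame(2250, 10))
```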

The benchmarks below give an idea, though no controlled tests on a single system have been done yet.

Radeon VII:

4. 32FPS

5. 7.5 seconds per frame HD quality

7. 26.5 seconds per frame 4k quality

8. 3:30 poster quality render


GTX 1080 Ti:

4. 42fps

5. 4.5 seconds per frame HD quality

7. 21 seconds per frame 4k quality

8. 2:10 poster quality render


RTX 2080 Ti:

4. 57fps

5. 3.4 sec per frame HD

7. 14 sec per frame 4k

8. 1:30 poster render
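Putting the three result sets side by side, the relative render throughput can be computed from the per-frame times (lower time is faster; the numbers are taken from the results above, assuming the second and third sets are the 1080 Ti and 2080 Ti referenced earlier):

```python
# Per-frame render times in seconds from the results above (HD / 4K)
times = {
    "Radeon VII":  {"hd": 7.5, "4k": 26.5},
    "GTX 1080 Ti": {"hd": 4.5, "4k": 21.0},
    "RTX 2080 Ti": {"hd": 3.4, "4k": 14.0},
}

base = times["Radeon VII"]
for card, t in times.items():
    # Throughput relative to the Radeon VII (values above 1.0 are faster)
    print(f"{card}: HD x{base['hd'] / t['hd']:.2f}, 4K x{base['4k'] / t['4k']:.2f}")
```

Which puts the 2080 Ti at roughly 2.2x the Radeon VII at HD and about 1.9x at 4K, matching the "almost exactly double" observation.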

I've discussed this with Lumion on their forum, and I'm going to keep asking, but by the sounds of it their stance is that it is what it is: they don't actually optimise for any particular cards, and the results are what they are.

A little bit frustrating, to be honest, as I bought this card for the purpose of rendering. That said, I'm not massively bothered, as I can now render things without affecting my other work, i.e. for the price of one 2080 Ti I have two Radeon VIIs and a more efficient workflow.

Is there anything that can be done to optimise for this from AMD's end?

Lumion 10 apparently uses the DirectX 11 API.

3 Replies

What AMD Driver do you currently have installed?

Latest AMD Driver for Radeon VII - 12/12/19 : 

What PSU wattage are you using? According to a PSU calculator website, you need a minimum of 1000 watts to run two Radeon VIIs, and the CPU you have installed can also draw a large amount of power under heavy loads.

The reason I ask is that the GPUs or the CPU may be getting throttled under heavy rendering loads.
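As a rough back-of-the-envelope check on the power budget (using AMD's published board-power and TDP figures; actual draw varies, and the system allowance is a guess):

```python
# Rough worst-case DC power draw using AMD's published figures
radeon_vii_board_power = 300   # W each (AMD rates the Radeon VII at 300 W)
tr_2970wx_tdp = 250            # W (rated TDP of the Threadripper 2970WX)
rest_of_system = 100           # W, a rough allowance for board, RAM, drives, fans

worst_case = 2 * radeon_vii_board_power + tr_2970wx_tdp + rest_of_system
print(worst_case)  # 950 W, so a quality 1000+ W PSU is the sensible minimum
```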

Lumion 10 Recommended System Requirements: 

I know you are aware of this already, but have you checked what kind of G3DMARK score you are getting from the Radeon VII, and CPUMARK for the Threadripper, to see if they correspond with the above-mentioned scores? Are these the benchmarks you posted in your original post?

Does Lumion 10 have any settings to optimize the rendering?

Just throwing out some basic stuff.



The benchmarks in the OP are just from the Lumion software, using the method from their forum that I posted above. I'll need to run some PassMark benchmarks (which is what the spec you posted refers to) over the weekend when I get some time.

Yes, fair point. I was running the 19.12.1 driver. For power I have a Corsair HX1200i, i.e. 1200 W, and monitoring the frequencies in the tuning panel, the card sits at 1800 MHz during a render. It's unlikely that I'll max out both GPUs and the CPU at any one time, so it shouldn't be a problem with this power supply.

Lumion has very few tuning options. For single-image renders there is only the choice of resolution; the only real tuning options are the various filters you apply, which affect both the quality and the render time.

For videos there is a 5-star quality rating, plus choice of resolution and frame rate.

Practically speaking, to optimise render time you would reduce some of the filter qualities, lower the resolution, or drop the frame rate for video. That's fine, but then you are losing quality relative to the competition. The point is that the benchmarks in the OP all use the same filter quality and settings as a baseline. I'm also not interested in trading quality for performance on still images, though it would be worth doing for video, as videos can take hours to render.

I know the 2970WX isn't the best single-thread CPU, but I've found that overclocking it (under water) doesn't really affect the rendering times at all.

As an observation on how this program works: when you overclock the CPU, only the FPS in build mode increases slightly. Looking at individual threads in Task Manager, it behaves like a game, maxing out a single thread in build mode, so it makes sense that the CPU is the bottleneck in this mode.

However, when you are rendering something, it does not max out any of the CPU cores and overclocking the CPU has no effect at all, so I think we are GPU-limited during renders.

So yes, I think I would get a nice FPS boost for working in the model if I eventually upgrade to a 3rd-gen Ryzen, for example, but it would not improve the rendering times.

Therefore I believe it's the raw performance of the card, and the driver, that is holding it back during renders.
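The bottleneck reasoning above can be expressed as a simple sensitivity check (a sketch, not anything built into Lumion; the numbers below are hypothetical):

```python
def clock_sensitivity(time_base, time_oc, clock_base, clock_oc):
    """Fractional speed-up per fractional CPU clock increase.
    Close to 1.0 means CPU-bound; close to 0.0 means the CPU is not the bottleneck."""
    speedup = time_base / time_oc - 1.0
    clock_gain = clock_oc / clock_base - 1.0
    return speedup / clock_gain

# Hypothetical numbers: a 10% overclock (3.0 -> 3.3 GHz) that leaves the
# per-frame render time unchanged gives a sensitivity of 0.0, i.e. GPU-bound.
print(clock_sensitivity(26.5, 26.5, 3.0, 3.3))
```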

I am going to be watercooling these cards to squeeze out a bit more performance, but it won't be much of an uplift.

Anyway, I'll follow up with the requested benchmarks.


There are a few users here at AMD who render professionally. Maybe one of them can help, though they use different rendering software, like Maya or Adobe products.

No need to upload the benchmarks. I only mentioned it because that is what Lumion 10 lists as a recommended GPU.

Thanks for the detailed answer. It may help someone else here help you increase rendering performance with the Radeon VII GPU and Ryzen processor.

Here are a few users of rendering software who have opened threads at AMD, as an example: