See the attached images GroundClay - Viewport and GroundClay - Render for the difference.
Also attached is the original Blend file packed with the images and node setup.
Made on macOS Blender 2.79b with RPR 1.6.159.
Expected result: the Production Render should be as "displaced" as the Viewport render. The difference is clearly visible in the screenshots.
Rendering was done on 1x eGPU only.
MacBook Pro 2017.
So I need to investigate more, but this MIGHT be expected behavior rather than a subdivision bug. Let me explain:
First, watch this: Video Link : 1241
Adaptive subdivision works by calculating the subdivision level based on the camera. An adaptive subdiv level of "1" means subdivide until each face covers one pixel, 0.5 means subdivide until each face covers half a pixel, and 2 means subdivide until each face takes up two pixels. (Realistically, 1 makes the most sense for maintaining smoothness in your render.)
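To make the level-to-depth relationship concrete, here is a minimal conceptual sketch (not ProRender's actual code): treating the level as a target on-screen edge size in pixels, each subdivision iteration halves a face's projected edge, so the required depth follows from a log2 of the ratio. The function name and the pixel-edge model are illustrative assumptions.

```python
import math

def adaptive_subdiv_depth(projected_edge_px, level):
    """Conceptual sketch: iterations needed so each face's projected
    edge shrinks to roughly `level` pixels. Each subdivision iteration
    halves the edge length. Not actual RPR code."""
    if projected_edge_px <= level:
        return 0  # face is already at or below the target size
    return math.ceil(math.log2(projected_edge_px / level))

# A face whose edge projects to 64 px on screen:
print(adaptive_subdiv_depth(64, 1))    # level 1   -> 6 iterations
print(adaptive_subdiv_depth(64, 0.5))  # level 0.5 -> 7 iterations (finer)
print(adaptive_subdiv_depth(64, 2))    # level 2   -> 5 iterations (coarser)
```

This is why 0.5 produces heavier meshes than 1, and 2 lighter ones: each halving of the level adds one more full subdivision pass.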
Obviously it needs the camera info to calculate this, and the render resolution matters too. But you'll notice that the camera and resolution of the viewport render can be different from those of the final frame render! The viewport render "camera" has to cover all the area outside the border, even though that area is cropped for the final render. Moreover, the resolution of the area inside the border is higher than your final resolution of 1280x720: on my MacBook Pro, the viewport resolution (inside the border) is 1687x812. With the resolution higher and the camera different from the final render, you should expect a different adaptive subdivision level. If the resolution and the camera matched exactly, you would expect the same subdivision.
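The resolution dependence can be sketched with a quick calculation using the numbers from this report (1280x720 final frame vs. the 1687x812 viewport region). A projected edge scales linearly with horizontal resolution, so the same geometry can land one subdivision level deeper in the viewport; the helper below is a conceptual model, not ProRender's implementation.

```python
import math

def subdiv_depth(edge_px, level=1.0):
    # Iterations needed to shrink a projected edge to `level` pixels,
    # halving the edge each iteration (conceptual sketch, not RPR code).
    if edge_px <= level:
        return 0
    return math.ceil(math.log2(edge_px / level))

# An edge spanning 60 px in the 1280-wide final frame spans roughly
# 60 * (1687 / 1280) ~= 79 px in the 1687-wide viewport region:
print(subdiv_depth(60))                  # final render   -> 6
print(subdiv_depth(60 * 1687 / 1280))    # viewport render -> 7
```

So even with identical framing, the higher-resolution viewport can cross a log2 threshold and subdivide one level further than the final render, which matches the visual difference in the screenshots.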
Now, a question worth asking here is whether the resolution of the interactive viewport should be capped at the final render resolution. That would also optimize for speed a bit. Let me know if this is not clear, and like I said, I'd recommend an adaptive subdiv level of "1" in most cases.
This makes sense from a technical point of view, in that this is how adaptive subdivision is designed to work. I can completely understand that.
From an artistic point of view, though, the way it works is not very helpful, IMO. I'm trying to get a consistent end result: I want my preview to reflect what I'm going to get in the final render, especially if I'm going to leave a render running for a long time. It makes little sense to see that the preview looked great and then have to scratch your head over why the final render, after what could be an hour of rendering, looks worse than the preview did.
When I set it to Adaptive subdiv = 1, Blender either crashes or hangs at 100% for a few minutes before it will begin a render. The entire scene becomes unstable and Blender gives up.
When I set it to the older Subdivision Levels instead, it's even less reliable: it crashes or displays extremely distorted results, and restarting the viewport render will often crash or produce different results each time.
I'm sure we'll get there in the end, because the result I managed to get in the viewport was utterly fantastic. If there were a more consistent and reliable way to set up the displacement materials and then obtain a matching final render, it would be brilliant. This is one of the best displacement results available in any renderer.