As a relative novice, I am here because I am grappling with working out a rendering pipeline. There is discussion related to this topic here: https://www.crowd-render.com/forum-1/future-features/apply-denoising-only-on-one-single-device?origin=auto_suggest
I have discovered that you should not use the denoise option in the render settings, but instead add the Open Image Denoise node in the compositor. There are two reasons: the compositor method is much faster, probably because the rendering process fully utilises your CPU, and the OptiX denoiser is much less friendly to animations. But neither denoises temporally - temporal denoisers work over a number of frames (usually around 7) so you don't get flickering artefacts when rendering animation.
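To make the compositor route concrete, here is a minimal bpy sketch of the node setup I mean (these are the standard node names in recent Blender builds - adjust for your version):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True                      # turn on the compositor node tree
tree = scene.node_tree
tree.nodes.clear()                          # start from an empty tree

# Render Layers -> Denoise (Open Image Denoise) -> Composite
rl = tree.nodes.new("CompositorNodeRLayers")
dn = tree.nodes.new("CompositorNodeDenoise")
comp = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rl.outputs["Image"], dn.inputs["Image"])
tree.links.new(dn.outputs["Image"], comp.inputs["Image"])
```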
Also, if using crowd render, each node's denoiser works independently and has to guess at the pixels either side of its tile boundary. So denoising with crowd render leads to visible seams between the tiles each computer rendered in the assembled image.
So I am looking at rendering to multilayer OpenEXR using CR without denoising, then passing the EXR through the compositor for denoising and splitting into render layers to use in my VFX software, where I can use either the Neat temporal denoiser or the HitFilm alternative, which works well and some say works as well as Neat.
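For the EXR-splitting step, this is roughly what I set up in the compositor, scripted (the file paths are hypothetical, and the "Depth" pass is just an example - the File Output node gets one input socket per pass you want written to its own single-layer file):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Load one frame of the multilayer EXR (path is hypothetical)
img = bpy.data.images.load("//render/shot010_0001.exr")
img_node = tree.nodes.new("CompositorNodeImage")
img_node.image = img

# File Output writes each connected pass to its own single-layer file
out = tree.nodes.new("CompositorNodeOutputFile")
out.base_path = "//render/split/"           # hypothetical output folder
out.format.file_format = 'OPEN_EXR'

out.file_slots[0].path = "combined_"        # rename the default slot
tree.links.new(img_node.outputs["Image"], out.inputs[0])

# Add a slot per extra pass - "Depth" assumes a Z pass is in the file
out.file_slots.new("depth_")
tree.links.new(img_node.outputs["Depth"], out.inputs["depth_"])
```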
So one of my questions is: what sample counts do people typically use for animation? I realise the answer will usually be "depends", but there must be some kind of range or typical situational values. There must be a balance between reasonable render times and quality.
I am finding that the only way to get something usable is to continually up the samples. The stuff I am trying to do is often very dark and heavy on volumetrics. I am trying to find ways to "cheat". For example, splitting a scene into layers and rendering them separately, so the background volumetrics can be dealt with on their own - the brighter foreground elements seem to work well with the denoisers, allowing lower sample rates and thus faster rendering.
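In case it helps anyone, that foreground/background split can be done with view layers, something like this (the collection names here are hypothetical - substitute your own):

```python
import bpy

scene = bpy.context.scene

# One view layer for the bright foreground, one for the volumetrics
fg = scene.view_layers["ViewLayer"]         # the default layer name
bg = scene.view_layers.new("Volumetrics")

# "Foreground" and "BackgroundVolumes" are hypothetical collection names
fg.layer_collection.children["BackgroundVolumes"].exclude = True
bg.layer_collection.children["Foreground"].exclude = True
```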
I've also discovered that you can combine render engines. In one .blend I can set up part of the shot as a scene using Cycles and part as a scene using Eevee, then composite them later in the VFX software. How would this work in CR? If I set up one scene in each render engine and hit render, will they all be distributed so I can go to bed?
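Since the engine is a per-scene setting, the two-engine setup is just this (the scene names are made up):

```python
import bpy

# Engine is a per-scene setting; one scene per engine in the same .blend
bpy.data.scenes["Scene.cycles"].render.engine = 'CYCLES'
bpy.data.scenes["Scene.eevee"].render.engine = 'BLENDER_EEVEE'
```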
You can bet I've trawled the interwebs... but it's all so frustratingly vague. The best so far is the Blender Guru video on denoising, but even that is much more focussed on stills.
How are others approaching this?
Hi James,
What I meant was: if you use the denoiser in the render pane rather than in the compositor, you will get slightly different results from each render node, so you will see visible seams between the CR tiles. It's important to denoise as a last step in the compositor. It's also faster.
Also, with denoising it is tempting to shoot for lower sample settings. I am aware that different contexts will allow for different sample settings, but over time I am starting to get a feel for what will be needed where. Still, I am painfully going through trial and error, and everyone is so damned coy about fleshing out the ballpark rates they have settled on, or ways to approach it. Do people routinely split scenes into background and foreground and composite later, for example?
Another example: the less light in a scene, the more noise. Volumetrics also increase noise because they scatter light unevenly. More light bounces mean a slower render, but it may not always be necessary to have so many bounces, allowing for faster renders and higher sample counts. Do you go for higher volume light bounces and then reduce the effect in compositing?
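For clarity, these are the Cycles bounce settings I mean trading off (the values below are just an illustration, not a recommendation):

```python
import bpy

cycles = bpy.context.scene.cycles

# Trimming bounces buys render time that can be spent on more samples
cycles.max_bounces = 4        # overall cap on bounces
cycles.diffuse_bounces = 2
cycles.glossy_bounces = 2
cycles.volume_bounces = 0     # single scattering only - much faster volumes
```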
There must be some relationship between the amount of light in a scene and the sample setting needed for successful temporal denoising later.
The OptiX and Intel denoisers are similar, and the Intel one seems a bit better for animation, but should we avoid using them altogether? I am leaving them on and including a denoising data pass in the EXR. The reason I mentioned bringing the multilayer EXR back into Blender to split up (and/or denoise) is that my VFX software does not handle multilayer EXR - only single layer.
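Enabling the denoising data pass is one checkbox on the view layer; scripted it looks like this (the property name is from recent Blender versions - in the UI it's the "Denoising Data" checkbox):

```python
import bpy

# Store denoising albedo/normal as extra passes in the multilayer EXR,
# so the compositor's Denoise node (or external tools) can use them later
bpy.context.view_layer.cycles.denoising_store_passes = True
```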
I am finding file output during the render to be unreliable, but exporting the layers from the multilayer file afterwards works fine.
Putting numbers on things: for medium-low light, between 700 and 1200 samples looks to be enough for a temporal denoiser to handle. For medium to high light you could maybe get away with 500 samples. Anything lower than that might be possible, but it will be tricky.
It also looks to me that the more movement in the scene, the higher the samples required. So for medium movement add 100 to 200 samples; for greater movement add maybe 200 to 300. I could be wrong about that... just trying to put some numbers on things.
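If you want to apply those ballparks from a script, it's a single Cycles property (the value is just the middle of my medium-low-light range above):

```python
import bpy

# 900 sits in the middle of the 700-1200 range for medium-low light
bpy.context.scene.cycles.samples = 900
```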
Hi Rohan,
Ok, as far as samples go, it is something that needs tuning for each scene, so it is a 'depends' answer. I can speak to the difficulty of using denoising, though. In the latest build of Crowdrender (V0.3.0 at the time of writing), and in fact back to V0.2.10, all passes are available in the compositor automatically; there is no need to render out to EXR first and then re-import those frames. You can see this in the video linked below, where we tested this process - all passes are automatically populated in your compositor node tree/network.
This means that the denoising data will be available for the compositing step, which is applied automatically in Blender, so you won't have to do an additional step like the one you were describing (sorry if I actually misunderstood you though).
As for multiple scenes or layers, we've also got you covered. I happened to do an experiment today that confirms this is possible. It's a basic experiment, but it's a proof of concept nonetheless :)
As always, I hope this has helped, please let me know if I can improve on this :)