Adventures in Iray Documentation

margrave Posts: 1,822
edited July 2021 in Daz Studio Discussion

I just finished reading through the Iray documentation. This post compiles the interesting Daz-related tidbits I came across.

3.6 - Instancing (nvidia.com)

As I explained in another thread, activating "Instance Optimization" will automatically shift your scene to world origin, preventing the dreaded black eyes bug.
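
For anyone wondering why distance from the world origin matters at all: geometry and ray hits are typically stored in single-precision floats on the GPU, and the spacing between representable float values grows with distance from the origin, so fine detail far from the origin gets quantized onto a coarser grid--a plausible source of artifacts like the black eyes bug. A tiny stand-alone C++ illustration (not Iray code) of that spacing:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Smallest representable single-precision step at various distances
        // from the origin; detail finer than this step simply cannot be stored.
        for (float x : {1.0f, 100.0f, 10000.0f, 1000000.0f}) {
            float step = std::nextafter(x, 2.0f * x) - x;
            std::printf("%9.0f units from origin -> smallest step = %g\n", x, step);
        }
        return 0;
    }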

3.10 - Denoising (nvidia.com)

Explains the denoiser settings. I personally never use it, because I think it looks fugly, but if you're wondering what the settings mean and want some useful caveats (like not mixing the denoiser with bloom), have a look.

3.11 - Deep-learning-based SSIM predictor (nvidia.com)

Explains the Post SSIM parameter in "Progressive Rendering". I still don't really get it, but it appears to have been trained (like an upscaling program) to detect when your render is "done".
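
For what it's worth, SSIM itself is just the standard structural-similarity index; the predictor presumably estimates the SSIM between the current frame and the (unknown) fully converged result. Below is a minimal single-window sketch of the metric with the usual constants--generic image-processing math, not code from Iray:

    #include <cstddef>
    #include <vector>

    // Plain (single-window) SSIM between two equally sized grayscale patches
    // with pixel values in [0, 1]. C1 and C2 are the stabilizing constants
    // from the original SSIM paper.
    double ssim(const std::vector<double>& x, const std::vector<double>& y) {
        const double C1 = 0.01 * 0.01, C2 = 0.03 * 0.03;
        const std::size_t n = x.size();
        double mx = 0, my = 0;
        for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double vx = 0, vy = 0, cov = 0;
        for (std::size_t i = 0; i < n; ++i) {
            vx  += (x[i] - mx) * (x[i] - mx);
            vy  += (y[i] - my) * (y[i] - my);
            cov += (x[i] - mx) * (y[i] - my);
        }
        vx /= n; vy /= n; cov /= n;
        return ((2 * mx * my + C1) * (2 * cov + C2)) /
               ((mx * mx + my * my + C1) * (vx + vy + C2));
    }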

4.6.2 - Convergence quality estimate (nvidia.com)

Apparently, when you enable SSIM, the other rendering quality sliders are disabled completely. Daz gives no visual cue about this.

4.8 - Caustic sampler (nvidia.com)

Caveats for using the Caustic Sampler. Apparently, it only benefits "turntable" scenes, where the glass object is set on a pedestal and is the focus of the scene. It does nothing if your scene just has some glass bits somewhere in it.

4.11 - Rendering options (nvidia.com)

Iray Photoreal natively supports the FILTER_BOX, FILTER_TRIANGLE, and FILTER_GAUSS filters for antialiasing. If FILTER_CMITCHELL or FILTER_CLANCZOS is requested, Iray Photoreal will use the default box filter instead.

Apparently, the Mitchell pixel filter--which people say gives the best results--does nothing in Iray Photoreal; it silently falls back to the box filter.

Also, when using the "Mitchell" filter, the preferred value is 0.5.
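
For context on what a pixel filter actually does: each sample contributes to its pixel with a weight that falls off with the sample's distance from the pixel centre, out to the filter radius. A generic illustration of the three shapes Photoreal supports (the falloff constants here are illustrative, not Iray's):

    #include <cmath>

    // r = distance of the sample from the pixel centre, radius = filter radius.
    float box_weight(float r, float radius)      { return r <= radius ? 1.0f : 0.0f; }
    float triangle_weight(float r, float radius) { return r <= radius ? 1.0f - r / radius : 0.0f; }
    float gauss_weight(float r, float radius) {
        // Truncated Gaussian; the falloff constant here is illustrative only.
        return r <= radius ? std::exp(-2.0f * (r / radius) * (r / radius)) : 0.0f;
    }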

When the nominal luminance value is set to 0, Iray Photoreal will estimate the nominal luminance value from the tonemapper settings. If a user application applies its own tonemapping without using the built-in tonemappers, it is strongly advised to provide a nominal luminance.

Basically, it seems like the Firefly Filter's "Nominal Luminance" can be left at 0 if you have the tonemapper enabled.

For now, this can only reduce memory usage on pre-Turing generation GPUs and the CPU (while potentially harming rendering performance) if "on" is used.

"Ray Tracing Low Memory" doesn't do anything unless your GPU is older than the RTX 1600 series.

Allows to override the per-object boolean attribute "shadow_terminator_offset" globally for all scene objects.

Apparently Iray has a shadow terminator offset (which smooths out the blocky shadow artifacts on low-poly objects), but Daz doesn't expose it?

13.1 - Tonemapping (nvidia.com)

Nvidia's recommendation for "Burn Highlights" is 0.5, whereas Daz defaults to 0.25.
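
To get a feel for what that slider changes, here is a small sketch assuming Burn Highlights behaves like the classic photographic exposure operator its name mirrors (out = in * (1 + bh * in) / (1 + in), so 1 is linear and 0 is a full Reinhard-style rolloff). That mapping is an assumption on my part, not something stated in the Iray manual:

    #include <cstdio>

    // Assumed highlight compression: bh = 1 leaves values untouched,
    // bh = 0 rolls highlights off as in / (1 + in).
    float burn_highlights(float in, float bh) {
        return in * (1.0f + bh * in) / (1.0f + in);
    }

    int main() {
        for (float in : {0.5f, 1.0f, 2.0f, 4.0f})
            std::printf("in = %.1f   bh 0.25 -> %.3f   bh 0.50 -> %.3f\n",
                        in, burn_highlights(in, 0.25f), burn_highlights(in, 0.50f));
        return 0;
    }

Under that assumption, 0.5 compresses bright values noticeably less than the 0.25 default.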

14 - Physically plausible scene setup (nvidia.com)

General tips for photorealism.

18.2 - Render target canvases (nvidia.com)

Breakdown of the different canvases and what they do. If you've ever wondered what the difference between Depth and Distance is, here you go.
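
The short version of that distinction, as most renderers define it (check the linked section for Iray's exact wording): "Distance" is the straight-line length from the camera to the hit point, while "Depth" is the hit point's coordinate along the camera's viewing axis. A hypothetical sketch:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Euclidean distance from the camera position to the hit point.
    float distance_value(Vec3 hit, Vec3 cam_pos) {
        Vec3 d = sub(hit, cam_pos);
        return std::sqrt(dot(d, d));
    }

    // Projection of the hit point onto the camera's (unit-length) view direction.
    float depth_value(Vec3 hit, Vec3 cam_pos, Vec3 cam_forward) {
        return dot(sub(hit, cam_pos), cam_forward);
    }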

Comments

  • jbowler Posts: 798

    One thing new in 4.15 is that it is now possible to have tonemapping on while using canvases; previously the canvas data would be tone-mapped as well, which made it pretty much useless. The result of this is that the convergence stuff, including SSIM, has become useful to me, and the issue with fireflies is apparently solved. It is necessary to get the tonemapping EV value correct, or at least approximately correct. Indeed, the EV value fundamentally controls the render time: if it is too low (making the tone-mapped result too bright and washed out), convergence takes many more iterations; if it is too high (dark result), convergence happens too fast, leaving fireflies and noise that become clearly visible once the canvas is exposed correctly.

    The convergence behavior also seems to have flipped with 4.15 or maybe 4.14 - previously the recommendation was to not leave dark areas in the scene as they would not converge.  Now the opposite seems to be true; it's over-exposed areas that don't converge.  I suspect the former is just a bug fix and the latter is because the canvas is no longer tone-mapped so it contains light values above 1.0 - tone-mapping limits the value of a pixel to 1.0, so when a sample gets significantly above that value the corresponding pixel won't change.

  • margrave Posts: 1,822

    I tried the SSIM. It stopped after like 200 iterations and looked terrible, lol.

  • jbowler Posts: 798

    margrave said:

    I tried the SSIM. It stopped after like 200 iterations and looked terrible, lol.

    I've seen issues with strand based hair, which takes forever to converge completely. I suspect this is caused by all the reflections, but getting rid of them on a 4K render can take me a 24-hour+ render (on a Titan XP). There's probably an easy way to fix this, but I don't know it...

  • j cade Posts: 2,310

    jbowler said:

    margrave said:

    I tried the SSIM. It stopped after like 200 iterations and looked terrible, lol.

    I've seen issues with strand based hair, which takes forever to converge completely. I suspect this is caused by all the reflections, but getting rid of them on a 4K render can take me a 24-hour+ render (on a Titan XP). There's probably an easy way to fix this, but I don't know it...

    ? I've always found strand hairs to clear up muuuuuuuuuuuuuuuuch faster than mesh hair -- as in, a quarter of the time.

  • nonesuch00 Posts: 18,274

    Well, you got me trying SSIM now to see if my renders go faster on an nVidia GTX 1650 4GB. I've already been using the Mitchell filter, but set at 1.0, as I thought that was equivalent to no filter; apparently I'm wrong and it needs to be set at 0.5.

  • This part is also interesting. Always wondered how it interacts with Daz node instances.

    3.6 Instancing

    Iray supports two ways of handling scene geometry: flattening and instancing. Several modes are available to control how and which geometry is instanced. The choice of mode is controlled by the following attribute on the IOptions class:

    mi::IString iray_instancing

    Controls the instancing mode in Iray, which can be one of the following values, where "off" is the default:

    "off"

    If instancing is disabled (the default), all scene geometry is flattened into a single memory block. Geometry that is instanced multiple times in the scene will be duplicated. This mode almost always leads to higher memory use than using instancing "on", but often yields higher rendering performance.

    "on"

    If instancing is enabled, all instanced geometry will only exist once, so input scene instances will also be instances in the rendering core. This may yield a significantly lower memory footprint, but may incur significant runtime costs, especially if the geometric extent of the instances overlap. Iray will also apply incremental object transform updates when instancing is enabled. This mode significantly reduces the scene preprocessing runtime overhead when moving objects around dynamically.

    "user"

    Without further intervention, this mode behaves like the "off" mode. This mode allows for fine-grained control over what is instanced and what is flattened. Scene elements like objects, groups, and instances can be tagged for instancing, as explained in the following section. Iray will also apply incremental instance transform updates when user instancing is enabled. This mode significantly reduces the scene preprocessing runtime overhead when moving around (flattened) instances.

    "auto"

    If instancing is set to auto mode, Iray will automatically detect and decide which objects to instance, in order to reduce the memory footprint and speed up object transform updates. Input scene instances will usually be all instanced in the rendering core, unless there is a significant downside for memory or performance. This mode may significantly reduce the scene preprocessing runtime overhead when repeatedly changing the transformation of a group or (flattened) instance. In addition, this mode responds to the same controls as the user mode.

    Also, this might be useful for devices with a low amount of VRAM.

    5.8 Global performance settings

    The following parameters are shared by all render contexts which use Iray Interactive and are set with the IRendering_configuration::set_renderer_option method.

    mi::Float32 irt_working_memory = "0.9"

    This setting limits the amount of working device memory available to Iray Interactive as a fraction of the total device memory. The working device memory is used internally for temporary storage. It does not include memory required for scene and frame buffer storage. The setting is only used as an upper bound. Iray will allocate as much free device memory as possible up to the amount specified. By default the amount is set to 90%. Iray performance benefits from increased working memory, so this option should be set as high as possible.
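
    For anyone who'd rather poke at these from the SDK side than from inside Daz Studio, here's a rough C++ sketch of how the two settings above would be applied. The names ("iray_instancing", "irt_working_memory") and the set_renderer_option method come straight from the quoted docs; the surrounding calls (the umbrella header, create_attribute, set_c_str) are my guess at the Iray SDK interfaces and may differ between SDK versions.

        #include <mi/neuraylib.h>  // assumed umbrella header of the Iray (neuray) SDK

        // Sketch: switch the scene options to automatic instancing ("auto" above).
        void enable_auto_instancing(mi::neuraylib::IOptions* options) {
            // IOptions is an attribute set; "iray_instancing" is the string
            // attribute named in section 3.6 ("off" / "on" / "user" / "auto").
            mi::base::Handle<mi::IString> mode(
                options->create_attribute<mi::IString>("iray_instancing", "String"));
            if (mode)
                mode->set_c_str("auto");
        }

        // Sketch: cap Iray Interactive's temporary working memory at 50% of
        // device memory (default 0.9), per section 5.8.
        void limit_irt_working_memory(mi::neuraylib::IRendering_configuration* rc) {
            rc->set_renderer_option("irt_working_memory", "0.5");
        }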

  • nonesuch00 Posts: 18,274

    nonesuch00 said:

    Well, you got me trying SSIM now to see if my renders go faster on an nVidia GTX 1650 4GB. I've already been using the Mitchell filter, but set at 1.0, as I thought that was equivalent to no filter; apparently I'm wrong and it needs to be set at 0.5.

    I have tried "auto-detect" convergence and it's slower: after 14.5 hours it's at only 603 iterations. I'll let it continue to run; maybe it will ultimately finish faster.

    The other thing I did was change the Burn Highlights from 0.25 to 0.5, as you said the nVidia Iray documentation recommends. I've not compared the two renders side by side yet, but I think I like the nVidia recommendation of 0.5 better than the DAZ default of 0.25.
