Extra Grainy Volumetrics in 4.9.3.166

algovincian Posts: 2,636
edited January 2017 in Daz Studio Discussion

Been playing around with a Lekkulion dual wielding a pair of lightsabers in 4.9.3.166. Let this render cook for 15,000 iterations @ 2400x3120 pixels, and expected all grain to disappear (especially when downsampled to 800 pixels wide), but it didn't:

There were fireflies still present, too (which were removed via script). Has something changed under the hood? I've done similar scenes with volumetrics in the past, and was able to get the grain to go bye-bye completely after far fewer iterations:
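For reference, a minimal sketch of the kind of outlier clamp such a firefly script can use (an illustration, not the actual script I ran): any pixel several times brighter than its 3x3 neighborhood median gets replaced by that median.

```python
import numpy as np

def remove_fireflies(img, factor=4.0):
    """Replace isolated super-bright pixels with their 3x3 neighborhood
    median (interior pixels only, single channel for simplicity).
    An illustration of the idea, not the script actually used."""
    # Stack the 9 values of each interior pixel's 3x3 neighborhood.
    shifts = [img[r:r + img.shape[0] - 2, c:c + img.shape[1] - 2]
              for r in range(3) for c in range(3)]
    med = np.median(np.stack(shifts), axis=0)
    center = img[1:-1, 1:-1]
    out = img.copy()
    # A pixel far brighter than its local median is treated as a firefly.
    fireflies = center > factor * (med + 1e-6)
    out[1:-1, 1:-1] = np.where(fireflies, med, center)
    return out
```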

Same original-size 32-bit canvas output from DS, with scripted HDR processing and downsampling applied to both images. I'd rather not re-install the older beta I was using at the time to test it, but the difference seems pronounced.

Anybody else notice this?

- Greg

lek-10-800.png
800 x 1040 - 982K

Comments

  • Oso3D Posts: 15,042

    Were you using Spectral Rendering?

     

  • Mattymanx Posts: 6,950

    What was the convergence % when you quit the render?  Higher convergence equals less noise.

  • Tobor Posts: 2,300

    Did you last try in 4.8? As you may know, there was a significant change in the volumetric functions in the base engine between 4.8 and 4.9, along with, I believe, associated changes in the uber MDL script.

    If this is between 4.9 versions, you can check the change notices posted with each update. 

    I assume your previous try also ran for 15K samples, and it was better? It's useful to note both iterations and convergence, since they don't necessarily run in sync.

    On at least the lightsaber image, what happens if you run a high frequency blur filter through it? Does it lose too much detail?

  • algovincian Posts: 2,636

    Haven't played with spectral at all yet.

    Convergence % was turned off, as usual.

    The carnival scene was rendered using the first beta that supported Pascal cards (can't remember the version atm). It was stopped at around 4000 iterations. I've played with filtering and masking out the areas where there is structure, but would prefer not to.

    I believe there were some posts about grain with regards to the beta, but as I recall, results were mixed. Will try to dig them up once I'm off this iPad and on a real computer.

    Thanks for all of your responses.

    - Greg

  • algovincian Posts: 2,636

    It also just occurred to me that there is no geometry behind the figures in this scene. I've seen this be an issue under other circumstances, though as I look on this device, there seems to be no difference in the grain in front of the Lekkulion's left leg. Will look closer when I can on the full-resolution version.

    - Greg

  • algovincian Posts: 2,636

    I also didn't use DOF in the new image, while it was used in the carnival scene. This may be the difference.

    - Greg

  • Tobor Posts: 2,300

    Convergence % was turned off, as usual.

    While you can shunt the stop-at values so you effectively have an endless render, what's helpful is to compare iterations to the convergence achieved at that point. This helps tell you scene efficiency. In your tests write down both. You ideally want to reach optimal convergence in the least number of iterations -- that's basically how to estimate efficiency. (The reason: each iteration is a pass of all pixel samples, and convergence is an internal estimate of how many pixels have reached the designated threshold -- talking about the Render Quality setting here, not Convergence Ratio. Threshold convergence in fewer iterations means those pixels satisfied the algorithms faster, ergo, better efficiency.)
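The bookkeeping I mean is nothing more than this (the numbers below are made up purely for illustration):

```python
def efficiency(converged_fraction, iterations):
    """Convergence achieved per 1,000 iterations -- a rough figure of merit
    for comparing how efficiently different scene setups render."""
    return converged_fraction / iterations * 1000.0

# Hypothetical log of two test renders: (convergence at stop, iterations).
runs = {"scene A": (0.92, 4000), "scene B": (0.88, 15000)}
for name, (conv, iters) in runs.items():
    print(f"{name}: {efficiency(conv, iters):.3f} convergence per kiter")
```

Here scene A, converging nearly as far in about a quarter of the iterations, would be the more efficient setup.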

    It's also useful to monitor if the scene is visually unacceptable, yet Iray has reached a reasonable convergence stop-at value. This means something inside Iray is lying about the convergence estimate. (In my experience, this can be caused by things like textures that introduce a lot of high frequency noise. The metal flake shaders can be a problem here.)

    The carnival scene is going to be a hard one for Iray in any case, as you have pushed the blacks way up. Incomplete convergence is most noticeable in areas of darker, but not black, shadow.

    Bear in mind that scaling down the render to hide the grain often costs more detail than running a highpass filter over it. You lose 3/4 of the pixels by reducing an image by 50% -- e.g. a 1000×1000 image has 1M pixels, but a 500×500 image has only 250K. The advantage of the highpass, especially if you write it yourself, is that you can limit the block size to immediate neighbors only, restrict it to specific colors, etc. 
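A minimal sketch of that neighbor-limited idea (one interpretation, single-channel for simplicity): average each pixel with its four immediate neighbors only where the local variation is small, so grain gets smoothed while strong edges are left alone.

```python
import numpy as np

def selective_smooth(img, threshold=0.05):
    """Blend each interior pixel toward its 4-neighbor mean, but only where
    the pixel sits close to that mean (i.e. it's grain, not an edge)."""
    out = img.copy()
    # Mean of the four immediate neighbors for every interior pixel.
    nmean = (img[:-2, 1:-1] + img[2:, 1:-1] +
             img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    center = img[1:-1, 1:-1]
    grain = np.abs(center - nmean) < threshold   # small deviation = noise
    out[1:-1, 1:-1] = np.where(grain, (center + nmean) / 2.0, center)
    return out
```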

  • Ruphuss Posts: 2,631

    what nice compositions algovincian

     

  • algovincian Posts: 2,636
    Ruphuss said:

    what nice compositions algovincian

    Thanks, Ruphuss - glad you enjoy them. Now, if I could only get the volumetrics to be buttery smooth!

     

    Tobor said:

    Convergence % was turned off, as usual.

    While you can shunt the stop-at values so you effectively have an endless render, what's helpful is to compare iterations to the convergence achieved at that point. This helps tell you scene efficiency. In your tests write down both. You ideally want to reach optimal convergence in the least number of iterations -- that's basically how to estimate efficiency. (The reason: each iteration is a pass of all pixel samples, and convergence is an internal estimate of how many pixels have reached the designated threshold -- talking about the Render Quality setting here, not Convergence Ratio. Threshold convergence in fewer iterations means those pixels satisfied the algorithms faster, ergo, better efficiency.)


    In my experience, the convergence % (however it may be calculated -- it's not documented AFAIK) isn't all that useful. I've had scenes that look half-cooked yet are supposedly "converged", and I've also had scenes with a low convergence % that look done. Ultimately, assuming there are no memory limits, visual appearance vs. time is all that matters to me.

    In my own testing, rendering out a higher resolution and downsampling has consistently produced a noticeably sharper image. It has the added benefit of producing much better prints as well.
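Even the crudest 2x box downsample shows why: each output pixel averages four rendered samples, and uncorrelated grain drops roughly as 1/sqrt(N). (A sketch -- in practice a Lanczos or similar filter preserves more sharpness than a plain box average.)

```python
import numpy as np

def downsample_2x(img):
    """Halve each dimension by averaging 2x2 blocks. Each output pixel
    averages four rendered samples, so uncorrelated noise is roughly
    halved (it falls as 1/sqrt(N) with N samples)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # trim odd edges
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] +
            x[0::2, 1::2] + x[1::2, 1::2]) / 4.0
```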

    Your comment about the carnival scene being a tough one for Iray is interesting, as the Lekkulion image rendered longer and the volumetrics were visually grainier. After some further investigation last night, my conclusion is that it all comes down to shadows (particularly the lighting necessary to create sharp shadows).

    There are no god-ray type shadows in the carnival image due to the distribution of lights. The rays in the lightsaber image are caused by a single spotlight pointed at the camera, but located further back behind the Lekkulion's head. I believe using an array of lights rather than a single point might help with the grain, but I also think it may have the unwanted side effect of making the rays less pronounced.

    It should be noted that both scenes were rendered out as 32-bit EXR, with subsequent HDR processing done via script to achieve the 8-bit images. The HDR processing really makes the images pop IMHO, but it also amplifies *any* grain that may be present.
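To illustrate the amplification (a generic Reinhard-plus-gamma sketch, not my actual HDR script): the transfer curve is steepest near black, so tiny linear differences between dark volumetric pixels turn into clearly visible 8-bit steps.

```python
import numpy as np

def tonemap_reinhard(hdr, exposure=1.0):
    """Map linear 32-bit HDR values to 8-bit with a simple Reinhard curve
    plus a rough gamma encode. The curve's steep slope near black is why
    shadow grain gets amplified in the 8-bit result."""
    x = hdr * exposure
    ldr = x / (1.0 + x)                        # Reinhard global operator
    srgb = np.clip(ldr, 0.0, 1.0) ** (1 / 2.2)  # approximate gamma encode
    return (srgb * 255).astype(np.uint8)
```

For example, linear values of 0.01 and 0.02 land roughly a dozen 8-bit levels apart -- which is exactly where the volumetric grain lives.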

    Thanks again for taking the time to respond, Tobor. I enjoy these more technical conversations.

    - Greg
