3Delight Laboratory Thread: tips, questions, experiments


Comments

  • wowie Posts: 2,029

    Sorry, I've been busy tweaking and debugging the shaders. Plus some real life stuff.

    I went ahead and added support for path tracing area lights. Essentially, if you enable reflection and global illumination, you'll get direct lighting for free with those lights. On comparable settings, path traced area lights are actually faster than old school area lights or even point/spot lights - up to 3 times faster. It was pretty hard to implement, but the speed up was worth the trouble.

    With that implemented, I had to make sure path traced area lights work with SSS. It took a lot of time to work out the problems I was seeing, but that's fixed now. In the process, I reworked how SSS was integrated.

    Some interesting things from my latest tests - it seems I was both right and wrong about progressive rendering. Progressive rendering (or progressive refinement) is faster, but more prone to noise. Originally, I thought this was due to using the 1x1 box filter, but that isn't the case. So for final renders, it's best to disable progressive rendering if you want minimal noise.

    Another behavior I hadn't expected is that as you raise pixel samples, the render time difference between progressive rendering and non progressive (but still using the path tracer) gets smaller and smaller. Now I understand why the 3delight devs recommend not using progressive refinement and using 16x16 pixel samples for final renders.

    Some redundant switches have been removed from the shader, mostly to make it less confusing and less prone to misuse (or abuse). I've also implemented a LOD scheme for bump, mostly because I was sick of having to tweak bump settings depending on how far or near the object is to the camera when rendered. Bump also influences roughness, though not like Pixar's Bump to Roughness. Currently I'm trying to add a switch to enable/disable bump for the coat layer, so you can have bump affect the base and coat, or just the base.
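
    To give a rough idea of what a distance-based bump LOD can look like, here's a generic Python sketch (made-up names and formula, not the actual shader code): fade the bump strength with camera distance and push the lost high-frequency detail into roughness.

        def bump_lod(base_bump, base_roughness, cam_distance,
                     lod_start=100.0, lod_end=1000.0):
            # Hypothetical distance-based bump LOD: t = 0 means full bump at or
            # inside lod_start, t = 1 means fully faded at or beyond lod_end
            # (distances in scene units).
            t = min(max((cam_distance - lod_start) / (lod_end - lod_start), 0.0), 1.0)
            bump = base_bump * (1.0 - t)
            # the faded-out bump detail broadens the specular lobe instead,
            # loosely in the spirit of bump-to-roughness techniques
            roughness = min(base_roughness + 0.5 * base_bump * t, 1.0)
            return bump, roughness

        print(bump_lod(0.02, 0.3, cam_distance=50.0))    # close up: full bump
        print(bump_lod(0.02, 0.3, cam_distance=2000.0))  # far away: bump traded for roughness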

  • kyoto kid Posts: 41,244

    ...yeah I've taken to using progressive mode only for tests.

  • Sven Dullah Posts: 7,621
    wowie said:

    Sorry, I've been busy tweaking and debugging the shaders. Plus some real life stuff.

    I went ahead and added support for path tracing area lights. Essentially, if you enable reflection and global illumination, you'll get direct lighting for free with those lights. On comparable settings, path traced area lights are actually faster than old school area lights or even point/spot lights - up to 3 times faster. It was pretty hard to implement, but the speed up was worth the trouble.

    Wow, that's a significant speed boost, coool!

    wowie said:

    With that implemented, I had to make sure path traced area lights work with SSS. It took a lot of time to work out the problems I was seeing, but that's fixed now. In the process, I reworked how SSS was integrated.

    Some interesting things from my latest tests - it seems I was both right and wrong about progressive rendering. Progressive rendering (or progressive refinement) is faster, but more prone to noise. Originally, I thought this was due to using the 1x1 box filter, but that isn't the case. So for final renders, it's best to disable progressive rendering if you want minimal noise.

    Interesting, I also assumed it was the filter. Not that it matters, but do you know why it works that way? It's something to do with optimizing raytracing?

    wowie said:

    Another behavior I hadn't expected is that as you raise pixel samples, the render time difference between progressive rendering and non progressive (but still using the path tracer) gets smaller and smaller. Now I understand why the 3delight devs recommend not using progressive refinement and using 16x16 pixel samples for final renders.

    Some redundant switches have been removed from the shader, mostly to make it less confusing and less prone to misuse (or abuse). I've also implemented a LOD scheme for bump, mostly because I was sick of having to tweak bump settings depending on how far or near the object is to the camera when rendered. Bump also influences roughness, though not like Pixar's Bump to Roughness. Currently I'm trying to add a switch to enable/disable bump for the coat layer, so you can have bump affect the base and coat, or just the base.

    Thanks Wowie, great news :)

  • I finally had one of the new fresh-beef, cooked-to-order Quarter Pounders with cheese at McDonald's, and they are great, and now this news from wowie! Things are looking up!!

  • wowie Posts: 2,029
    edited April 2018

    Wow, that's a significant speed boost, coool!

    The catch is that's for comparable quality. Basically, where you'd have a spot, point or distant light set to use 256 shadow samples, area lights can 'make do' with just 128 samples.

    Interesting, I also assumed it was the filter. Not that it matters, but do you know why it works that way? It's something to do with optimizing raytracing?

    The 3delight devs never went into much detail explaining what goes on with progressive refinement. I think they use that mode mainly for IPR and kept non progressive for final renders.

    Found an 'unorthodox' way of optimizing render times for GI, plus getting rid of more noise. Basically, global illumination is done twice, once by the shader and once by a GI light. Here are low res JPEG comparison shots at 8x8 pixel samples. The distant light is set to 0 softness, so I kept the shadow samples at 4. This is inside an environment sphere with an HDRI.

    Progressive - Time: 3 minutes 13.52 seconds

    Non progressive with 2x2 gaussian filter - Time: 4 minutes 27.93 seconds

    The non progressive one is just a tad better, though it is slower by more than a minute. I find using 16x16 pixel samples should get rid of most global illumination noise, while at the same time giving you very nice anti-aliased filtering. If you're not sensitive to noise, values between 8 and 16 will work. I think mustakettu uses 10.

    The non progressive, 16x16 pixel samples with 2x2 gaussian filter - Time: 8 minutes 54.54 seconds
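
    For those wondering why 16x16 helps that much: Monte Carlo noise falls roughly as 1/sqrt(N), so going from 8x8 (64 samples per pixel) to 16x16 (256) should roughly halve the remaining noise for roughly double the render time, which lines up with the timings above. A tiny numpy sketch of that behaviour (a toy integrand, nothing 3delight-specific):

        import numpy as np

        rng = np.random.default_rng(0)

        def pixel_noise(samples_per_pixel, trials=2000):
            # std-dev of a toy Monte Carlo pixel estimate at a given sample count
            estimates = rng.random((trials, samples_per_pixel)).mean(axis=1)
            return estimates.std()

        for spp in (8 * 8, 16 * 16):
            print(f"{spp:4d} spp -> noise ~ {pixel_noise(spp):.4f}")
        # 64 spp comes out roughly twice as noisy as 256 spp (1/sqrt(N) behaviour)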

    [Attached images: progressive.jpg, non progressive.jpg, 16x16.jpg]
    Post edited by wowie on
  • Oso3D Posts: 15,045

    I hope you will include a number of render setting presets.

    One of my huge frustrations with 3dl that drove me to focus more on Iray was that there are so many weird interactions and special rules (like those talked about all through this thread!) that it's overwhelming for a user. You get weird or slow results and don't even have the information or clues to know what to look out for or where to find solutions.

    I mean, I had no clue that progressive mode was a bad thing. But then apparently I was stuck between progressive with raytracing or straight without (in most cases), unless I was recoding a bunch of my own stuff...

     

     

  • Sven Dullah Posts: 7,621
    wowie said:

    Wow, that's a significant speed boost, coool!

    The catch is that's for comparable quality. Basically, where you'd have a spot, point or distant light set to use 256 shadow samples, area lights can 'make do' with just 128 samples.

    Interesting, I also assumed it was the filter. Not that it matters, but do you know why it works that way? It's something to do with optimizing raytracing?

    The 3delight devs never went into much detail explaining what goes on with progressive refinement. I think they use that mode mainly for IPR and kept non progressive for final renders.

    Found an 'unorthodox' way of optimizing render times for GI, plus getting rid of more noise. Basically, global illumination is done twice, once by the shader and once by a GI light. Here are low res JPEG comparison shots at 8x8 pixel samples. The distant light is set to 0 softness, so I kept the shadow samples at 4. This is inside an environment sphere with an HDRI.

    Progressive - Time: 3 minutes 13.52 seconds

     

    Non progressive with 2x2 gaussian filter - Time: 4 minutes 27.93 seconds

     

    The non progressive one is just a tad better, though it is slower by more than a minute. I find using 16x16 pixel samples should get rid of most global illumination noise, while at the same time giving you very nice anti-aliased filtering. If you're not sensitive to noise, values between 8 and 16 will work. I think mustakettu uses 10.

    The non progressive, 16x16 pixel samples with 2x2 gaussian filter - Time: 8 minutes 54.54 seconds

     

    Nice! What was the shading rate in those non progressive ones? Possible to maybe speed up rendering a bit compared to progressive mode, without losing too much detail, by increasing the shading rate a tad?

  • Sven Dullah Posts: 7,621
    Oso3D said:

    I hope you will include a number of render setting presets.

    One of my huge frustrations with 3dl that drove me to focus more on Iray was that there are so many weird interactions and special rules (like those talked about all through this thread!) that it's overwhelming for a user. You get weird or slow results and don't even have the information or clues to know what to look out for or where to find solutions.

    I mean, I had no clue that progressive mode was a bad thing. But then apparently I was stuck between progressive with raytracing or straight without (in most cases), unless I was recoding a bunch of my own stuff...

     

     

    Oh I don't think it's a bad thing :) I've used it for final renders many times. It just doesn't work for all kinds of scenes. And shadows and reflections (well, everything that requires raytracing) look slightly different than in non progressive mode. When speed matters I like to use it; one can always clean up the noise in Gimp or PS if needed. (Although I hate having to do a lot of postwork in general.)

  • Oso3D Posts: 15,045

    See, that's the thing; I feel like I need a 1980s-era Flight simulator pullout to figure out what I need to do when with 3DL... ;)

     

  • kyoto kid Posts: 41,244

    ...I feel I need a deskside supercomputer to render large scale, highly detailed, photoreal scenes in Iray in less than the 12 hours to a day or more they currently take.  Haven't won the lotto yet (next draw tomorrow night).

  • Oso3D Posts: 15,045

    Yeah, but in my case, it's true. ;)

     

  • kyoto kid Posts: 41,244

    ..same for myself.

  • Sven Dullah Posts: 7,621
    Oso3D said:

    See, that's the thing; I feel like I need a 1980s-era Flight simulator pullout to figure out what I need to do when with 3DL... ;)

     

    The AweShader interface sure looks like it :D

  • wowie Posts: 2,029
    edited April 2018
    Oso3D said:

    I hope you will include a number of render setting presets.

    It will be included in the commercial pack.

    Nice! What was the shading rate in those non progressive ones? Possible to maybe speed up rendering a bit compared to progressive mode, without losing too much detail, by increasing the shading rate a tad?

    There's no shading rate. ;) By that I mean, this is still using the path tracer, not 3delight's REYES mode. With the path tracer, the shading rate is simply ignored. That's one of the perks of using mustakettu's script to interface with the 3delight renderer.

    Based on experience, using the path tracer roughly corresponds to using a shading rate of 0.2, even if the shading rate value was set to lower or higher in the renderer options or shaders (like omnifreaker's UE2).

    The shadows are already very detailed if you use path traced area lights. By default, they use 128 samples, which may look like a lot, but there's really no speed up if you use fewer.

    If you use old school delta/point/distant lights or area lights similar to omnifreaker's UberArea light, then it will vary depending on how much shadow softness you want. Softer shadows require more samples. I probably should look into making new ones with adaptive sampling, mostly because those light shaders determine the sampling value themselves.
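
    Something like this, maybe - a back-of-the-napkin Python sketch of what 'adaptive' could mean here, not an existing light shader:

        def shadow_samples(softness, base_samples=16, max_samples=256):
            # Hypothetical heuristic: hard shadows (softness 0) get by with few
            # samples, very soft shadows ramp up toward the maximum.
            samples = int(base_samples + softness * (max_samples - base_samples))
            return max(base_samples, min(samples, max_samples))

        print(shadow_samples(0.0))  # hard shadow -> 16
        print(shadow_samples(0.5))  # half soft   -> 136
        print(shadow_samples(1.0))  # very soft   -> 256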

    For denoising, you might want to look into ImageMagick. You can basically use a batch script to process lots of images at once. Plus, it has support for median denoising: you render the same image several times with different noise seeds, then combine them so the outliers (the noise) get filtered out.
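
    If you'd rather do the median combine yourself, the same trick is only a few lines of numpy/Pillow - this just illustrates the general technique, with placeholder file names:

        import numpy as np
        from PIL import Image

        # the same frame rendered a few times with different noise seeds
        paths = ["render_seed1.png", "render_seed2.png", "render_seed3.png"]

        stack = np.stack([np.asarray(Image.open(p), dtype=np.float32) for p in paths])
        # per-pixel median across the renders rejects the outliers (the noise)
        median = np.median(stack, axis=0)

        Image.fromarray(median.astype(np.uint8)).save("render_denoised.png")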

    The technique is explained quite simply here.

    If you have Photoshop, that is already in your arsenal.

    For animation, there's a lot of denoise filters out there, depending on your app of choice (Nuke, Fusion, etc). There's also NeatImage, which is pretty good.

    Speculating wildly here, it would be nice to have AOV output (LPEs for those more familiar with Iray) with DS and 3delight. Plus a nice, easy way of doing imager shaders. Some possibilities if that's exposed are (maybe) denoising, tone mapping, white balance correction or something like the multilight feature already found in 3delight's Katana implementation. Would also love the ability to pick a spot for white balance or even to set camera focus to alter DOF.

    Post edited by wowie on
  • Sven Dullah Posts: 7,621
    wowie said:
    Oso3D said:

    I hope you will include a number of render setting presets.

    It will be included in the commercial pack.

    Nice! What was the shading rate in those non progressive ones? Possible to maybe speed up rendering a bit compared to progressive mode, without losing too much detail, by increasing the shading rate a tad?

    There's no shading rate. ;) By that I mean, this is still using the path tracer, not 3delight's REYES mode. With the path tracer, the shading rate is simply ignored. That's one of the perks of using mustakettu's script to interface with the 3delight renderer.

    Aah, of course :D It's slowly dawning on me ;)

    wowie said:

    Based on experience, using the path tracer roughly corresponds to using a shading rate of 0.2, even if the shading rate value was set to lower or higher in the renderer options or shaders (like omnifreaker's UE2).

    The shadows are already very detailed if you use path traced area lights. By default, they use 128 samples, which may look like a lot, but there's really no speed up if you use fewer.

    If you use old school delta/point/distant lights or area lights similar to omnifreaker's UberArea light, then it will vary depending on how much shadow softness you want. Softer shadows require more samples. I probably should look into making new ones with adaptive sampling, mostly because those light shaders determine the sampling value themselves.

    For denoising, you might want to look into ImageMagick. You can basically use a batch script to process lots of images at once. Plus, it has support for median denoising: you render the same image several times with different noise seeds, then combine them so the outliers (the noise) get filtered out.

    The technique is explained quite simply here.

    If you have Photoshop, that is already in your arsenal.

    For animation, there's a lot of denoise filters out there, depending on your app of choice (Nuke, Fusion, etc). There's also NeatImage, which is pretty good.

    Tks for the tips!

    wowie said:

    Speculating wildly here, it would be nice to have AOV output (LPEs for those more familiar with Iray) with DS and 3delight. Plus a nice, easy way of doing imager shaders. Some possibilities if that's exposed are (maybe) denoising, tone mapping, white balance correction or something like the multilight feature already found in 3delight's Katana implementation. Would also love the ability to pick a spot for white balance or even to set camera focus to alter DOF.

     

  • wowie Posts: 2,029

    Looks like I misread Will's post.

    I was thinking about shading presets, not render presets. The render setting presets will be made for mustakettu's render script. Two presets - a draft/preview preset, which is basically the same as the DS default parameters, and a final quality preset.

    These will be selectable via the Render Script dropdown. The differences between the two are progressive mode and pixel samples. Both use the same script, but default to different values. Obviously, you can choose either one and change the values manually.

    The script will be included with the basic shader. I've already asked mustakettu and she said she's OK with me bundling her render script.

    The shader uses under-the-hood trickery to optimize sampling for quality AND performance. There are no shading rate/samples parameters outside of two specific SSS parameters - SSS samples (the default value is already pretty good in terms of quality and performance) and SSS ray weight (which will need to be adjusted depending on the pixel samples set in the render settings). If you use path traced area lights, you won't have sample settings on your lights either.

    I had thought about exposing some of the parameters, but through testing I found the performance differences are negligible (if any). These are settings like the Russian roulette threshold, primary (direct light) and secondary (indirect light) ray samples, and ray weights.

    Simply put, with the shader and the script, you only have to play around with pixel samples to choose between quality and performance. There are max diffuse and specular depth dials, but I find only the max diffuse depth has a performance penalty (beyond 4 bounces). For defaults, I've set max diffuse depth to 4 and max specular depth to 16, both in the shader and in the render settings of the render script.
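
    Summed up as data, the defaults described above come down to something like this (hypothetical key names, and the draft values are guesses, just collecting what's mentioned in this and the earlier post):

        # Two render presets: draft mirrors the DS defaults, final turns
        # progressive refinement off and raises pixel samples.
        RENDER_PRESETS = {
            "draft": {"progressive": True,  "pixel_samples": None},      # guesses: progressive on, DS default samples
            "final": {"progressive": False, "pixel_samples": (16, 16)},  # non-progressive, high pixel samples
        }

        # Depth defaults stated above, set both in the shader and in the
        # render script's render settings.
        DEPTH_DEFAULTS = {
            "max_diffuse_depth": 4,    # noticeable cost only beyond 4 bounces
            "max_specular_depth": 16,  # negligible extra cost, so left high
        }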

    The GI light will possibly include a samples parameter, just in case you need more samples.

  • Those should work! I can't wait!

  • Sven Dullah Posts: 7,621
    wowie said:

    Looks like I misread Will's post.

    I was thinking about shading presets, not render presets. The render setting presets will be made for mustakettu's render script. Two presets - a draft/preview preset, which is basically the same as the DS default parameters, and a final quality preset.

    These will be selectable via the Render Script dropdown. The differences between the two are progressive mode and pixel samples. Both use the same script, but default to different values. Obviously, you can choose either one and change the values manually.

    The script will be included with the basic shader. I've already asked mustakettu and she said she's OK with me bundling her render script.

    The shader uses under-the-hood trickery to optimize sampling for quality AND performance. There are no shading rate/samples parameters outside of two specific SSS parameters - SSS samples (the default value is already pretty good in terms of quality and performance) and SSS ray weight (which will need to be adjusted depending on the pixel samples set in the render settings). If you use path traced area lights, you won't have sample settings on your lights either.

    I had thought about exposing some of the parameters, but through testing I found the performance differences are negligible (if any). These are settings like the Russian roulette threshold, primary (direct light) and secondary (indirect light) ray samples, and ray weights.

    Simply put, with the shader and the script, you only have to play around with pixel samples to choose between quality and performance. There are max diffuse and specular depth dials, but I find only the max diffuse depth has a performance penalty (beyond 4 bounces). For defaults, I've set max diffuse depth to 4 and max specular depth to 16, both in the shader and in the render settings of the render script.

    The GI light will possibly include a samples parameter, just in case you need more samples.

    Seems pretty straightforward :) Tks wowie!

  • wowie Posts: 2,029
    edited May 2018

    Another update.

    I think I've finally worked out how to integrate SSS into the shader, with both direct and global illumination. Integrating and optimizing SSS has been quite a journey, to say the least. Here are some test renders with Ten24 head scans and Digital Emily from the Wikihuman project. I ditched Emily's default eyes and used G2F's instead, along with Laticis' free fibermesh eyebrows.

    The first one was done with 16x16 pixel samples and took something like 18 minutes, while the others were done at 8x8 and completed in about 5 minutes.

    Lesson learned from this test - I need to add a multiplier modifier to the SSS scale. Turns out the actual model size and displacement you use have quite an impact on how subsurface works. The scales I used for these are really small. Plus, I'm not completely happy with my bump/displacement code, but at least it works.
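
    The multiplier idea is really just a units fix: the scattering distances are expressed in scene units, so the same preset reads very differently on a full-size figure than on a small prop. Roughly (hypothetical parameter names, example numbers only):

        def effective_sss_scale(base_scale, size_multiplier=1.0):
            # base_scale: whatever the shader preset ships with;
            # size_multiplier: user-facing dial to compensate for model size
            # (and displacement depth) without retyping tiny per-channel values
            return base_scale * size_multiplier

        # a half-scale prop needs its scattering distances halved to read the same
        print(effective_sss_scale(0.0015, 0.5))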

    [Attached images: skintest1.jpg, skintest2.jpg, skintest3.jpg, skintest4.jpg]
    Post edited by wowie on
  • Oso3D Posts: 15,045

    Yeah, I've noticed that with SSS and Iray. I made some generative cloud stuff, but ultimately the user really needs to tweak some values, because a 1 meter cloud varies rather significantly from a 1 km cloud.

     

  • kyoto kid Posts: 41,244
    wowie said:

    Another update.

    I think I've finally worked out how to integrate SSS into the shader, with both direct and global illumination. Integrating and optimizing SSS has been quite a journey, to say the least. Here are some test renders with Ten24 head scans and Digital Emily from the Wikihuman project. I ditched Emily's default eyes and used G2F's instead, along with Laticis' free fibermesh eyebrows.

    The first one was done with 16x16 pixel samples and took something like 18 minutes, while the others were done at 8x8 and completed in about 5 minutes.

    Lesson learned from this test - I need to add a multiplier modifier to the SSS scale. Turns out the actual model size and displacement you use have quite an impact on how subsurface works. The scales I used for these are really small. Plus, I'm not completely happy with my bump/displacement code, but at least it works.

    ..those look really good. Did you use UE or IBL Master for the GI?

  • wowie Posts: 2,029
    edited May 2018
    kyoto kid said:

    ..those look really good. Did you use UE or IBL Master for the GI?

    Neither.

    aweSurface implements its own shader-based global illumination. Basically, you just need to add an environment sphere, then insert an HDRI into the ambient color slot (on the sphere's material). The shader-based GI automatically 'picks up' any ambient surfaces and mesh emitters in the scene, plus bounced light from any diffuse surfaces.

    As previously noted, this arrangement has extra benefits.

    • The HDRI is visible in the viewport, so orientation adjustment is easier.
    • You can have multiple environment spheres, each with its own visibility tags (camera, reflections/refractions, global illumination).

    I've made my own custom ambient/emitter shader (aweEmitter) to replace omnifreaker's Simple Surface shader (the shader applied to UE2's environment sphere). It basically adds tiling controls so you can offset the textures used, effectively rotating the texture across the sphere. Since this is done in the shader, it doesn't change geometry so you can actually do adjustments in IPR. The aweEmitter also supports path traced area light and global illumination for diffuse. One extra feature I added is the ability to blur the environment texture, just in case you need it. Works quite nicely as a fake depth of field background.
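
    The rotation-by-offset trick works because a lat-long environment map wraps horizontally, so shifting U and wrapping is the same as spinning the sphere about its vertical axis. A minimal sketch of the lookup (generic Python, not the actual aweEmitter code):

        def rotated_latlong_uv(u, v, rotation_degrees):
            # shift U by rotation/360 and wrap; V (the latitude) stays untouched
            return ((u + rotation_degrees / 360.0) % 1.0, v)

        print(rotated_latlong_uv(0.9, 0.5, 90.0))  # -> (~0.15, 0.5)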

    This particular test scene didn't use any HDRI. I used a single path traced area light and enabled diffuse on the environment sphere. There's also a ground plane, but that's just plain diffuse with a little bit of spec/reflection. Mustakettu did suggest implementing an 'environment texture' method to basically 'pass' the texture to all surfaces in the scene, but I didn't implement it. It could be faster (I have never tried it), but you couldn't see the texture and it's prone to orientation problems (like UE2). I might implement it at a later date, but only when I've figured out how to do it so it's viewable in the viewport and adjustable in IPR.

    I did a little test a while back and was pleasantly surprised by the performance of both global illumination and reflection. With aweSurface, 3delight has a very minimal performance penalty going from a max trace depth of 3 to 16. That's probably why the 3delight devs decided to forego point based/photon mapping GI and implement a brute force one, which is also what's implemented in aweSurface. aweSurface does add an extra Russian roulette step on top of that.
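
    For anyone unfamiliar with the Russian roulette part: it's the standard Monte Carlo trick of randomly killing low-contribution paths and boosting the survivors so the estimate stays unbiased, which is a big part of why deep trace depths stay cheap. A generic sketch (not the aweSurface implementation):

        import random

        def russian_roulette(throughput, min_probability=0.05):
            # survival probability follows the brightest channel of the path's
            # remaining throughput, clamped so dim paths still have a small
            # chance to continue
            p = max(min_probability, min(1.0, max(throughput)))
            if random.random() > p:
                return None                       # path terminated
            return [c / p for c in throughput]    # survivor compensated, stays unbiased

        print(russian_roulette([0.8, 0.7, 0.6]))     # bright path: almost always survives
        print(russian_roulette([0.01, 0.01, 0.01]))  # dim path: usually terminated (None)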

    Post edited by wowie on
  • RAMWolff Posts: 10,249

    You're really amazing the socks off of us with your development peeks and posts. I hope this will be a full-on product in the DAZ store at some point!

  • wowie Posts: 2,029
    RAMWolff said:

    You're really amazing the socks off of us with your development peeks and posts.

    Thanks. Still a padawan learner compared to the kungfu master. :)

    I admit I'm not as brave as him in removing shader parameters.

    RAMWolff said:

    I hope this will be a full-on product in the DAZ store at some point!

    Same here. ;)

  • Mustakettu85 Posts: 2,933

    Faraday, Feynman and Kheffache. =)

  • wowie Posts: 2,029
    edited June 2018

    Almost there. Cropped version of the first pic.

    Need a more 'elegant' way to handle the SSS scales used. Don't really like the ears, though. It looks better with displacement details and looks more 'solid' too, but enabling displacement, even with the SSS scale offset, seems to take away most of the SSS bleed.

    With postwork - basically raised the saturation and played with the contrast a bit.

    [Attached images: SSS 1.jpg, SSS 2.jpg, SSS 3.jpg, SSS 4.jpg, Test SSS 1.jpg, Postwork SSS 1 crop.jpg]
    Post edited by wowie on
  • Ivy Posts: 7,165

    5 min animation rendered in 3dl using UE lighting ~ enjoy the film :)

  • kyoto kid Posts: 41,244

    ...nicely done.

    What hair did you use on her? (It looks familiar.)

  • Ivy Posts: 7,165

    Thank you KK

    I used Sherry Hair for G3: https://www.daz3d.com/sherry-hair-for-genesis-3-female-s

  • kyoto kid Posts: 41,244

    ...you're welcome.

    Sherry Hair - that's one I missed when looking for a good ponytail to use with my namesake character; it looks to have some nice movement morphs. Ended up getting the Rochelle Ponytail as I needed something with length. Recently got the Sporty Ponytail Hair to see how that works, as it looks a little closer to the original drawings I did since it is pulled back more (it also has a nice single braid style).
