3Delight Laboratory Thread: tips, questions, experiments


Comments

  • wowie Posts: 2,029
    edited July 2019
    Will these features be added to the SS channel or translucency?

    Subsurface.

    With the current build, translucency color is taken from the diffuse color map, correct? Will this new feature make it possible to use an independent translucency color using RGB maps?

    Yes. By default, it will use a blend between the diffuse map/color and the subsurface absorption color. I haven't added it yet, but you'll be able to blend between the diffuse map and white for the 'shallow' color, plus between the subsurface absorption color and a color you pick. Choosing white and your own color will produce more uniform looking SSS.
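
    To illustrate the idea, here's a minimal RSL sketch of that blend (parameter names are made up, not the actual AWE Surface properties):

        /* Hypothetical sketch of the planned blend, not AWE Surface code. */
        surface sssColorSketch(
            color absorptionColor = color(0.8, 0.4, 0.3);
            color userDeepColor = color(1, 0.5, 0.4);
            float shallowBlend = 0;  /* 0 = diffuse map/color, 1 = white */
            float deepBlend = 0;     /* 0 = absorption color, 1 = user color */
        )
        {
            /* Cs stands in for the diffuse map/color. */
            color shallow = mix(Cs, color(1, 1, 1), shallowBlend);
            color deep = mix(absorptionColor, userDeepColor, deepBlend);

            /* A real shader would feed these into the SSS computation;
               here we just visualize the blend. */
            Ci = Os * mix(deep, shallow, 0.5);
            Oi = Os;
        }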

    Yup, so I had a look and I get your point:) But the thing with REYES is it can be dumbed down infinitely and still look clean and (if you do it right) kind of cool. Well, that's of course a matter of personal taste. And you can postwork the hell out of it, worst-case scenario=)) Kettu, you had a bunch of really nice renders over there! Also loved the NPR ones! Hither Shore is just beautiful, and her expression really has some depth.

    I can see the appeal of dumbing things down to squeeze out as much performance as possible for a playblast or animation. But I can confirm what Mustakettu is saying: once you have AO (with UE2), the difference in render times between REYES and the path tracer (even using progressive, not the render script) is negligible.

    If you do want to dumb things down for rendering, here's what you need to do with AWE Surface.

    • Turn off GI and Path Traced Area Light. Leave Reflections on, since it's wickedly fast even without ray caching.
    • Light the scene using only point/spot/distant lights. Play with the falloff and use point lights to fake bounce lighting (see the sketch after this list).
    • Don't use rendered DOF. Render objects into separate layers, composite them in an image editor and apply blur to the objects/layers that are supposed to be out of focus.
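
    Here's that sketch: a hypothetical light shader (not a product file) showing how falloff and a shadow strength below 1 fake bounce/fill light:

        light fakeBounce(
            float intensity = 1;
            color lightcolor = color(1, 1, 1);
            float shadowStrength = 1;  /* < 1 lets light leak through, faking fill */
            float falloffExp = 2;      /* 2 = inverse square; lower = softer falloff */
        )
        {
            point from = point "shader" (0, 0, 0);
            illuminate(from) {
                float d = max(length(L), 0.001);
                /* Raytraced occlusion between the light and the surface point. */
                color occ = transmission(Ps, from);
                color shad = mix(color(1, 1, 1), occ, shadowStrength);
                Cl = lightcolor * intensity * shad / pow(d, falloffExp);
            }
        }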

    Let me test this out.

    • Rendered the Stanford Dragon, Lily and Buddha statues with those spheres in about 30 secs with full raytraced reflections at 400x600 resolution. It goes up to 3 minutes 34.79 seconds at 1280x720 for the closeup shot. This is on the standard 3delight path tracer (no render script, just progressive rendering enabled) at 8x8 pixel samples with a max trace depth of 16.

    Technically, reflections can be faster if AWE Surface uses environment map lookups a la Mustakettu's shadow catcher. With lookups, objects don't have to trace the environment sphere, just each other. That only works on those kinds of scenes, though.
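
    Roughly the difference, as a sketch (hypothetical shader, assuming a lat-long environment map):

        surface reflectSketch(string envmap = "";)
        {
            normal Nf = faceforward(normalize(N), I);
            vector R = reflect(normalize(I), Nf);

            color refl = 0;
            if (envmap != "")
                /* Direct lookup: no ray has to hit the env sphere at all. */
                refl = environment(envmap, vtransform("world", R));
            else
                /* Rays must actually hit the sphere geometry to see the sky. */
                refl = trace(P, R);

            Ci = Os * refl;
            Oi = Os;
        }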

    The bigger contributor to long render times is diffuse/GI. As noted, you can add a point/spot/distant light (or a few) to get the look you want, using shadow strength and softness to approximate the fill lighting you get with GI.

    • In this case, I added a distant light with raytraced shadows, and render time goes up to 4 minutes 37.85 seconds.
    • In comparison, using GI (with 1024 samples) and not using the distant light renders in 5 minutes 40.77 seconds.
    • Use the render script so 3delight uses proper diffuse ray caching, and it renders in 3 minutes 1.78 seconds.
    • Since batch renders aren't supposed to use progressive, and you get better quality without it anyway, let's turn off progressive in the render script. Renders in 3 minutes 43.70 seconds. That's actually very close to the no-diffuse/reflections-only render with the standard 3delight renderer with progressive enabled.

    So,...

    In this case, going with old-school lighting actually renders slower than just doing things 'properly' with fully path-traced GI and reflections. Plus you get to use path-traced area lights for direct lighting at virtually no render-time penalty.

    Now consider the time and effort of setting up those fake bounce lights in your scene compared to just applying/placing path-traced area lights. Stylistic rendering approaches aside, using 3delight's path tracer with GI is just simpler, easier to troubleshoot, faster to set up, and produces higher quality with generally less effort. But since it's still a rather new workflow in DS, there's a learning curve to properly using it and understanding how to make the best use of it.

    Currently, the bigger hit on render times comes from doing unoptimized opacity and subsurface scattering. Using those two together in closeups can easily double the render times. But this is true even in REYES with old-school shaders using point-based GI/SSS or even photon-mapped GI. Try rendering a figure with SSS enabled using UE2 indirect light and the Subsurface or UberSurface shaders. Then add reflections.

    Hence no modern renderer uses those techniques anymore. Path tracing is literally faster and less prone to problems. It simplifies the problem to just sampling, so most of the cutting-edge research is done in that area. There are those who prefer denoising to cope with the noise, but I generally think of that as a return to old tricks and tech.

    [Attached renders: 31.24 seconds.jpg, 3 minutes 34.79 seconds.jpg, 4 minutes 37.85 seconds.jpg, 5 minutes 40.77 seconds.jpg, 3 minutes 1.78 seconds.jpg, 3 minutes 43.70 seconds.jpg]
  • wowie Posts: 2,029
    edited July 2019
    RAMWolff said:

    I think you forgot... AWE is free... the base is, anyway. I have both the paid-for version and the free version. Not sure what the differences are, but I'm sure a lot could be done with the base that's over in the Freepozitory (as I still call it).

    It's the same shader.

    The commercial pack has presets for it, along with other stuff to get the most out of it: namely, the path-traced area light shader, utility scripts, the environment sphere and a custom shader for it.

  • Sven Dullah Posts: 7,621
    wowie said:
    » show previous quotes

    Tks wowie, good points altogether! Will eventually do some testing myself:) And aweSurface is natively animatable, which is very interesting. The last time we discussed it, you presented a workaround for saving the surface animation with the scene. IIRC it involved saving shader presets for each frame and Puppeteer. Would it be possible to do this via scripting?

  • wowie Posts: 2,029
    edited July 2019

    Tks wowie, good points altogether! Will eventually do some testing myself:) And aweSurface is natively animatable, which is very interesting. The last time we discussed it, you presented a workaround for saving the surface animation with the scene. IIRC it involved saving shader presets for each frame and Puppeteer. Would it be possible to do this via scripting?

    Not each frame, just on keyframes.

    Hmm, I haven't looked at the problem again. Let me check.

    So, I think you can use the new Properties .dsf preset output to do that. Technically, it will 'catch' every property from Parameters/Shaping/Pose Controls and, more importantly, the Surfaces tab, and it has an 'Animated Range' option.

    I tried it, but yeah, it doesn't do anything. I'm inclined to think DS doesn't save keyframe changes on Surface parameters, just the Posing/Shaping/Parameters ones.

    Saved a scene subset with both AWE Surface and dsDefaultMaterial. Same thing. So it's obviously a DS problem.

  • Sven Dullah Posts: 7,621
    wowie said:

    » show previous quotes

    Yeah that seems to be the case. I'll try your workaround.

  • Mustakettu85 Posts: 2,933
    RAMWolff said:

    why isn't DAZ ... embracing AWE fully, like having its published artists do more with it?

    As far as I understand the situation, the only artists DAZ can "have" do anything are those who create DAZ O commissions. All the other brokered products they can either accept or refuse.

    And then historically DAZ prefers DAZ O creators to use DAZ O shaders that they can distribute to the end-user for free. For example, the first UberSurface and AoA Subsurface are DAZ O; UberSurface2 and AWE are not. They are paid addons. 

    And then what Sven said comes into play - oldschool stuff may not necessarily be a gazillion times faster or easier to use, but it's what a lot of folks are used to. And, again, free.

    I think you forgot... AWE is free... the base is, anyway. I have both the paid-for version and the free version. Not sure what the differences are, but I'm sure a lot could be done with the base that's over in the Freepozitory (as I still call it).

    You're right, I literally forgot about the free version because I was trying to think business. 

    This doesn't really change things much because even the free release is still Wowie's work that he owns fully. DAZ couldn't rely on it always being there for the users - say, if Wowie wanted, he could pull it any day.

    In short, there has to be some sort of an agreement between DAZ and Wowie before DAZ could ask their artists to use aweSurface for extra 3Delight materials.

    Mind you, I said "extra" because the majority of the 3Delight users remaining in the DAZverse would expect to see shaders familiar to them.

    Which would bring product costs up, obviously. Even more so because DAZ would first need to have Wowie teach the artists how to use aweSurface properly.

    It's all quite an unlikely scenario.

  • Mustakettu85 Posts: 2,933
    But the thing with REYES is it can be dumbed down infinitely much and still look clean and (if you do it right) kind of cool.

    Keyword: do it right. Artist time. Other than that, yes, you can do super cool looking, super fast cel-shaded stuff in REYES :)

    Kettu, you had a bunch of really nice renders over there! Also loved the NPR ones! Hither Shore is just beautiful, and her expression really has some depth.

    Thanks, man.

    I have to say, though, that Hither Shore is a photo. Hard to fail with a model like Jen (my best friend, coincidentally). Postprocessing, you say?.. yes it exists.

    Let me guess... The dA website used their new dark theme to show my gallery to you? Not the old ugly-but-readable khaki one?

     

  • Sven Dullah Posts: 7,621
     
    » show previous quotes

    Haha, you got me there. Nevertheless, it's beautiful, and she is too:) I used to say once in a while that it's easy being human and impossible to fake it.

  • Mustakettu85 Posts: 2,933
     
    » show previous quotes

    Haha, you got me there. Nevertheless, it's beautiful, and she is too:) I used to say once in a while that it's easy being human and impossible to fake it.

    Thank you; she definitely is. 

    We weren't technically thinking about "h00manz", though ;) This is where the title comes from: http://tolkiengateway.net/wiki/Galadriel's_Song_of_Eldamar

  • Sven Dullah Posts: 7,621
    edited July 2019

    @Mustakettu85

    I'm a bit confused about your shadow catcher. Downloaded the latest HDRI from HDRI Haven: https://hdrihaven.com/hdri/?h=mall_parking_lot

    And now I notice that it appears mirrored when using your stuff. I used the subset with your env. sphere, converted to wowie's Environment shader. User error or something else? surprise

    [Attached: Mirrored HDR.png]
  • wowie Posts: 2,029

    This doesn't really change things much because even the free release is still Wowie's work that he owns fully. DAZ couldn't rely on it always being there for the users - say, if Wowie wanted, he could pull it any day.

    I have some reservations about this, but probably not for the reasons some people may think. Bottom line, the current state of affairs is probably the best that can be done at the moment. I keep all rights and maintain the code base.

    I am thinking about sharing the source code at some point, mainly to allow people to create their own networks if they need to. However, they will need to be quite proficient in RSL and Shader Builder, or I'll have to find a way (and allocate time) to port the shader to Shader Mixer.

    The other alternative is to create a new shader that mimics Iray's properties to a tee, at least to the extent of what can be transported over to 3delight. Still, there's the problem of what type of shader to use (Shader Builder/Shader Mixer).

    Seeing as DAZ never bothered to properly document importing Shader Builder macros/functions into Shader Mixer, the most feasible option is still a Shader Builder project. Unfortunately, that means a relatively steep learning curve for users.

  • Mustakettu85 Posts: 2,933
    edited July 2019

    » show previous quotes

    And now I notice that it appears mirrored when using your stuff. I used the subset with your env. sphere, converted to wowie's Environment shader. User error or something else?

    Hmm, interesting =) Does it look that way on wowie's envsphere or other skydomes?

    See, mine's a simple DS sphere with flipped normals (as much of a fixed reference point as I could come up with), and the "kettuworld" coordinate system was then made to match that. And then you can throw small DS spheres (with your HDRI applied) around your scene to easily see what the sky is like anywhere.

    I simply never used an HDRI with letters =D Only skies. But it makes sense that it ends up mirrored, because we're inside that sphere.
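
    For the record, the mirroring is just a sign flip in the lookup. A sketch, assuming a plain lat-long mapping (the actual envsphere UVs may differ):

        surface latlongSketch(string envmap = ""; float flipU = 0;)
        {
            /* Lat-long lookup from the view direction, in world space. */
            vector D = normalize(vtransform("world", I));
            float x = xcomp(D);
            if (flipU != 0)
                x = -x;  /* negating one horizontal axis mirrors the image */
            float ss = 0.5 + atan(zcomp(D), x) / (2 * PI);
            float tt = acos(clamp(ycomp(D), -1, 1)) / PI;

            color sky = 0;
            if (envmap != "")
                sky = color texture(envmap, ss, tt);
            Ci = Os * sky;
            Oi = Os;
        }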

    You could flip the HDRI in an image editor, like a recent GIMP build.

    PS: I first thought I was seeing things and that those letters couldn't be Cyrillic... and then I remembered Greg has Ukrainian collaborators now =)

  • Mustakettu85 Posts: 2,933
    wowie said:

    I am thinking about sharing the source code at some point, mainly to allow people to create their own networks if they need to.

    That makes sense, but you would probably want to run a poll beforehand to gauge how many people might be interested.

  • Sven Dullah Posts: 7,621
    edited July 2019

    » show previous quotes

    Hmm, interesting =) Does it look that way on wowie's envsphere or other skydomes?

    No, sorry, only yours=)

    » show previous quotes

    You could flip the HDRI in an image editor, like a recent GIMP build.

    I tried a horizontal flip in LIE but the car ended up being in shadows. So I just flipped the whole render and ignored the steering wheel being on the wrong side:)

    PS: I first thought I was seeing things and that those letters couldn't be Cyrillic... and then I remembered Greg has Ukrainian collaborators now =)

    Yeah, I saw things too at first. Already logged in on HDRI Haven and wrote a comment... luckily I checked the picture before I submitted, LOL.

  • Sven Dullah Posts: 7,621
    wowie said:

    I am thinking about sharing the source code at some point, mainly to allow people to create their own networks if they need to.

    That makes sense, but you would probably want to run a poll beforehand to gauge how many people might be interested.

    I would definitely be interested, but I would be pretty helpless with Shader Builder. Especially since documentation is pretty much non-existent. Porting it to Shader Mixer would be nice indeed, but then again, SM is not exactly the most stable application around.

  • Takeo.Kensei Posts: 1,303
    edited July 2019

     

    wowie said:

    I am thinking about sharing the source code at some point, mainly to allow people to create their own networks if they need to.

    That makes sense, but you would probably want to run a poll beforehand to gauge how many people might be interested.

    I doubt you'll have a lot of RSL experts to play with it. There is an example of building a Shader Mixer brick in the SDK. It would be more user friendly but requires more work.

    You could build an Iray-equivalent shader with 3DL (that's what I think you should have done from the beginning), but since it seems you didn't get message passing between shaders to work, it will require users to put maps in manually and correctly to be consistent (or at least some scripts). At least one useful function has also disappeared from the scripting API since DS3 that would have helped, and I don't know if more have disappeared or no longer work with the latest DS (I haven't used 3delight for a long time).

     

  • wowie Posts: 2,029
    edited July 2019

    I doubt you'll have a lot of RSL experts to play with it. There is an example of building a Shader Mixer brick in the SDK. It would be more user friendly but requires more work.

    Can you point me to that example? Is it in the DS Script SDK or the DS SDK?

    Honestly though, outside of a shadow catcher, I don't see a lot of value in environment map lookups. I actually found and fixed the issue with subsurface noise in HDRI-lit scenes while messing around with the new subsurface code. With the revised version, I'm still getting clean SSS even with just 256 samples and subsurface weight set to the bare minimum.

  • Takeo.Kensei Posts: 1,303
    wowie said:

    » show previous quotes

    Can you point me to that example? Is it in the DS Script SDK or the DS SDK?

    It's in the DS SDK. It's called curvebrick. You'll need the Visual Studio 2010 compiler.

    I remember also having tried to directly create a brick inside DS, and I think I got somewhat limited but successful results, though I don't really remember the details of what works and what doesn't. For the moment, let's just say that for complex shader bricks the DS SDK is the way.

    Message passing is useful if you target performance at some point. Without environment lookup you don't have MIS, for example. I know I made something complicated a few years ago, and in order to get some performance gain I remember I did not just pass environment maps. You may need to pass values and vectors.

    There is also the user-friendliness factor, where you could have a central panel with an overview of many Quality/Tonemapping/Environment/User choices (e.g. AOV output) which centralizes a bunch of common variables, a bit like the Iray render panel. Instead of having to set up everything manually for each shader, each shader instance will get the info from the central collection point.

    Ex: you have three different specular models available in your shaders. Instead of going through each shader to set the specular model, you set it once in the central panel and every shader in the scene is updated.
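
    A sketch of that mechanism in RSL, assuming the central panel writes a user option (e.g. Option "user" "float specmodel" [1]) into the RIB that every shader instance then reads:

        surface centralSpec(float Ks = 0.5; float roughness = 0.1;)
        {
            /* Fallback if the option is absent; 0 = built-in specular(),
               1 = a simple Blinn lobe. */
            uniform float specmodel = 0;
            option("user:specmodel", specmodel);

            normal Nf = faceforward(normalize(N), I);
            vector V = -normalize(I);
            color spec = 0;
            if (specmodel == 0) {
                spec = specular(Nf, V, roughness);
            } else {
                illuminance(P, Nf, PI / 2) {
                    vector H = normalize(normalize(L) + V);
                    spec += Cl * pow(max(0, Nf.H), 1 / roughness);
                }
            }
            Ci = Os * Ks * spec;
            Oi = Os;
        }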

  • Mustakettu85 Posts: 2,933

    No, sorry, only yours=)

    You don't have to say sorry, it's just a DS primitive :)

    I tried a horizontal flip in LIE but the car ended up being in shadows. So I just flipped the whole render and ignored the steering wheel being on the wrong side:)

    Well, for the steering wheel, flipping the render works because in some countries it _is_ on the "wrong side". If you had a licence plate, though... :)

    And I'm not sure LIE handles HDR images well; in my case it seems to create regular sRGB PNG copies in the temp folder.

    On Windows, Picturenaut will flip any HDR for you (the command is located in the "rotate image" submenu, though, so not exactly evident). No idea if there's a similar lightweight app for Mac, but again, GIMP 2.10 has a Mac version and it supports HDRI perfectly well.

  • wowie Posts: 2,029

    It's in the DS SDK. It's called curvebrick. You'll need the Visual Studio 2010 compiler.

    I remember also having tried to directly create a brick inside DS, and I think I got somewhat limited but successful results, though I don't really remember the details of what works and what doesn't. For the moment, let's just say that for complex shader bricks the DS SDK is the way.

    It's a start in the right direction.

    Message passing is useful if you target performance at some point. Without environment lookup you don't have MIS, for example. I know I made something complicated a few years ago, and in order to get some performance gain I remember I did not just pass environment maps. You may need to pass values and vectors.

    It is useful, but I don't actually use environment maps outside of test scenes. DAZ's own Environment Shader Builder macro relied on Nn/Nf vector manipulation for the lookup. When I was looking at Pixar's docs, they actually allow passing a matrix as 'envspace', but it seems 3delight's devs didn't allow that with 3delight's trace().

    For the shadow catcher, you can get away with using environment(), but for a surface shader I don't think it will do. environment() doesn't have parameters like brdf() and trace(), plus it doesn't do MIS, just plain importance sampling. To do proper MIS with it, you'd need to write your own importance sampling code (PDFs, the G masking/shadowing term) and do the proper weighting. I think Matt Ebb tried that once, though I think he was using gather(). He found 3delight's built-in trace() is just faster and easier to work with.
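
    For reference, the MIS weight itself (Veach's power heuristic) is tiny; the real work is producing correct PDFs for both sampling strategies. A sketch (hypothetical helper, not AWE Surface code):

        /* Power heuristic (beta = 2): weight for strategy f when the same
           direction could also have been generated by strategy g. */
        float powerHeuristic(float nf; float fPdf; float ng; float gPdf)
        {
            float f = nf * fPdf;
            float g = ng * gPdf;
            return (f * f) / (f * f + g * g);
        }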

    Honestly, if DS had the same coordinate system as 3delight, we wouldn't have this problem.

    What would be ideal is for DS to have a functional OpenGL viewport environment sphere or sun/sky environment that it can pass to whatever renderer plugin is selected. But looking at 4.11, that doesn't exist (even with Iray). That's a setup that's been available in Maya/Max/Houdini/C4D/Blender/Modo, or even game engines like Unity and UE, for ages.

    There is also the user-friendliness factor, where you could have a central panel with an overview of many Quality/Tonemapping/Environment/User choices (e.g. AOV output) which centralizes a bunch of common variables, a bit like the Iray render panel. Instead of having to set up everything manually for each shader, each shader instance will get the info from the central collection point.

    Ex: you have three different specular models available in your shaders. Instead of going through each shader to set the specular model, you set it once in the central panel and every shader in the scene is updated.

    I actually agree. It would be best to have those all in one place, which is why I went and put some on a 'dumb' ambient light shader.

    Since I wanted viewport response when you shift the horizontal offset of the HDRI, message passing that offset was not doable. Well, technically you can set up a DS custom script on the light shader to fetch the offset parameter off the sphere, but I think that's prone to problems.

    You also need to keep in mind there are certain limitations in place (when doing it via pure RSL/RiSpec). For example, shutter time is controlled via the Motion Blur settings in the renderer's options tab. Since it's a RiOption, light/surface shaders can only read this value, not change/write it. The only feasible solution is to 'split' the setting: one inside the light shader, which only affects scene exposure but has no effect on motion blur, and vice versa, the one in the renderer's options, which only affects motion blur but not scene exposure.
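
    Reading (never writing) that RiOption from a shader looks roughly like this; the light shader here is hypothetical:

        light exposureFromShutter()
        {
            /* RiOptions are read-only from shaders. */
            uniform float shutter[2] = { 0, 0 };
            option("shutter", shutter);  /* RiShutter open/close times */

            uniform float t = shutter[1] - shutter[0];
            /* Fall back to 1 when no shutter interval is declared, so the
               scene isn't black; a real shader would fold in ISO/F-stop. */
            uniform float exposure = (t > 0) ? t : 1;

            solar() {
                Cl = color(exposure, exposure, exposure);
            }
        }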

    AOV output is nice, but I think those should always be in the renderer's options. However, within the RSL framework, AOV output is at least two parts working together. Shaders need to be explicitly written to output AOVs and identifier attributes. The second part is the RIB export, and that means doing things via the render script. Currently, it doesn't have the code to export AOVs (yet). Mustakettu's the one working on that, so you should ask her.
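
    Sketched, the two parts look like this (the channel name is made up):

        surface withAOV(
            float Kd = 1;
            output varying color aov_diffuse = 0;  /* extra AOV output */
        )
        {
            normal Nf = faceforward(normalize(N), I);
            aov_diffuse = Kd * diffuse(Nf);
            Ci = Os * Cs * aov_diffuse;
            Oi = Os;
        }

        /* And what the render script would have to emit in the RIB:
           DisplayChannel "varying color aov_diffuse"
           Display "+diffuse.exr" "exr" "aov_diffuse"                */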

    As for BRDF choices, well, I think all other renderers put those in materials/shaders. There are overrides, but I think those mainly use AOVs and don't actually change the BRDF used by the specular/reflection lobes.

    Ultimately, what I'm saying is there are limitations in RSL/RiSpec and DS that make certain things impossible or just downright hard.

    Wildly thinking here, but the best thing would be USD export for DS. It would literally open up a lot of possibilities. Renderer options, for one. There's also easier lookdev/scene management, not to mention you could just give more memory to the USD standalone/renderer.

  • Takeo.Kensei Posts: 1,303
    wowie said:
     

    It is useful, but I don't actually use environment maps outside of test scenes. DAZ's own Environment Shader Builder macro relied on Nn/Nf vector manipulation for the lookup. When I was looking at Pixar's docs, they actually allow passing a matrix as 'envspace', but it seems 3delight's devs didn't allow that with 3delight's trace().

    For the shadow catcher, you can get away with using environment(), but for a surface shader I don't think it will do. environment() doesn't have parameters like brdf() and trace(), plus it doesn't do MIS, just plain importance sampling. To do proper MIS with it, you'd need to write your own importance sampling code (PDFs, the G masking/shadowing term) and do the proper weighting. I think Matt Ebb tried that once, though I think he was using gather(). He found 3delight's built-in trace() is just faster and easier to work with.

    You miss the point. IS or MIS is not really important. I'm just saying that message passing, as well as co-shader access and other advanced RSL 2.0 functionality, is useful. I'm the very lazy type that hates to do things twice when there is a way to prevent that.

    You don't need to pass a map right now because you use a solid sphere, I get it. And that gives me the perfect example (or so I hope, because I will not dig through some archive): if I remember correctly, Maya's Sun and Sky use a combination of shaders that work together, and I think they query information that should be outside the shader (the usual day/time/HH:MM:SS, transformed or not to suit the shaders). I don't have other Maya examples in my head (and that's an old story for me), but there are certainly other examples in the 3DL shader samples if you dig.

    Note: I don't think Matt Ebb used trace() because it was faster. He just used the standard Maya light that has trace but also PBGI, occlusion, and photon mapping for the first version. The second version used advanced RSL 2.0 with co-shader access to retrieve light info, and I don't remember any trace() being used. I think there was rather a dumb shader to collect data.

    wowie said:

    Honestly, if DS had the same coordinate system as 3delight, we wouldn't have this problem.

    What would be ideal is for DS to have a functional OpenGL viewport environment sphere or sun/sky environment that it can pass to whatever renderer plugin is selected. But looking at 4.11, that doesn't exist (even with Iray). That's a setup that's been available in Maya/Max/Houdini/C4D/Blender/Modo, or even game engines like Unity and UE, for ages.

    Yes and no. Iray is a GPU renderer, and the philosophy is to run a preview render so that you have a better idea of the final output.

    3DL has the IPR. Having an OpenGL preview was good in the old times, when CPU/GPU power was not like today's. That's just old legacy.

    Even Pixar made a GPU real-time preview tool for lighting purposes before rendering final frames.

    And really, there is an easy solution for the coordinate system. Since you speak of RIB export, you can simply export the RIB and call the 3DL renderer. It won't be done by DS, but it will be correct.

    You eventually have to pass the IBL sphere rotations to the light shader (after hiding the sphere at render time).

    wowie said:

    » show previous quotes

    I actually agree. It would be best to have those all in one place, which is why I went and put some on a 'dumb' ambient light shader.

    Since I wanted viewport response when you shift the horizontal offset of the HDRI, message passing that offset was not doable. Well, technically you can set up a DS custom script on the light shader to fetch the offset parameter off the sphere, but I think that's prone to problems.

    I don't really see the problem, but well, I don't have my head in it.

    wowie said:

    You also need to keep in mind there are certain limitations in place (when doing it via pure RSL/RiSpec). For example, shutter time is controlled via the Motion Blur settings in the renderer's options tab. Since it's a RiOption, light/surface shaders can only read this value, not change/write it. The only feasible solution is to 'split' the setting: one inside the light shader, which only affects scene exposure but has no effect on motion blur, and vice versa, the one in the renderer's options, which only affects motion blur but not scene exposure.

    I don't understand where this subject comes from, but OK. I don't understand where the problem is here either. Shaders mustn't write any renderer parameter; you do it via the renderer panel settings. The shaders only read data at render time, and these values won't change during the render, so I don't see why there would be a need to write render settings from the shaders while rendering. I guess you want to implement scene exposure somehow, and I still don't see the problem. Either you correct output values in linear space or you implement an imager shader (and I don't know if that workflow is correct), but that should work because you already read the required data at render time, before the rendering begins.

    wowie said:

    AOV output is nice, but I think those should always be in the renderer's options. However, within the RSL framework, AOV output is at least two parts working together. Shaders need to be explicitly written to output AOVs and identifier attributes. The second part is the RIB export, and that means doing things via the render script. Currently, it doesn't have the code to export AOVs (yet). Mustakettu's the one working on that, so you should ask her.

    Not really, it's not a request. AOV is just one classical example that came to mind, and it's not really difficult to implement. The only thing I will say is that if you plan to do AOV outputs and think of making Shader Mixer bricks, think the strategy through carefully.

    (BTW, this discussion made me think of an old shader I wrote that could be useful for Iray. So I'll certainly dig up a bit of old RSL.)

    wowie said:

    As for BRDF choices, well, I think all other renderers put those in materials/shaders. There are overrides, but I think those mainly use AOVs and don't actually change the BRDF used by the specular/reflection lobes.

    This was just an example. There are tons of shading models; I implemented them in the same shader and could easily switch between the models.

     

    wowie said:

    Ultimately, what I'm saying is there are limitations in RSL/RiSpec and DS that make certain things impossible or just downright hard.

    Wildly thinking here, but the best thing would be USD export for DS. It would literally open up a lot of possibilities. Renderer options, for one. There's also easier lookdev/scene management, not to mention you could just give more memory to the USD standalone/renderer.

    Expanding the render settings panel to add whatever options is doable. You may have a harder time with the Shader Mixer brick at the beginning.

    I was wildly thinking too. MDL is another option, but 3DL with RSL (the new version seems to only have OSL) is not supported, and I don't know what would be needed for it.

     

  • wowie Posts: 2,029

    You miss the point. IS or MIS is not really important. I'm just saying that message passing, as well as co-shader access and other advanced RSL 2.0 functionality, is useful. I'm the very lazy type that hates to do things twice when there is a way to prevent that.

    Ah OK. I understand your point now. Unfortunately, Shader Builder doesn't accept RSL 2.0 syntax. I guess you can bypass it: write your RSL 2.0 shader and compile it with shaderdl directly. I have no idea if RSL 2.0 compiled shaders will work with DS, though. There's no reason why they shouldn't, but I've never tried it, so I simply can't answer that. I also have no idea what the limitations or requirements for that are.

    Since I'm still using Shader Builder, all the shader code is still RSL 1.2. Why stick to RSL 1.2? It just makes it easier to maintain the code, troubleshoot bugs and add stuff.
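
    For anyone wondering what the difference looks like, here's a toy example (not AWE code). RSL 2.0 turns shaders into classes, which is what enables co-shaders and the newer message passing:

        /* RSL 1.x: one entry point, free functions only. */
        surface flatOld(color tint = 1;)
        {
            Ci = Os * tint;
            Oi = Os;
        }

        /* RSL 2.0: a shader class; methods share instance state, and other
           shaders can grab the instance as a co-shader and call into it. */
        class flatNew(color tint = 1;)
        {
            public void surface(output color Ci, Oi)
            {
                Ci = Os * tint;
                Oi = Os;
            }
        }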

    Yes and no. Iray is a GPU renderer, and the philosophy is to run a preview render so that you have a better idea of the final output.

    3DL has the IPR. Having an OpenGL preview was good in the old times, when CPU/GPU power was not like today's. That's just old legacy.

    Even Pixar made a GPU real-time preview tool for lighting purposes before rendering final frames.

    I disagree with this. OpenGL with some hybrid ray tracing or even screen-space hacks is still going to be faster than a preview with a GPU path tracer. It allows you to get even faster feedback and make changes on the fly. Both 3ds Max and Maya have made efforts to get the viewport render as close as possible to what you see in Arnold, even going so far as to make sure procedural OSL shaders (noise generation) render the same. Also true with Houdini.

    My point is there is always a need for a fast viewport approximation. I can't animate, scrub the timeline or set keyframes in Iray preview mode. There's general progress in that area in other apps, though.

    I guess you want to implement scene exposure somehow, and I still don't see the problem. Either you correct output values in linear space or you implement an imager shader (and I don't know if that workflow is correct), but that should work because you already read the required data at render time, before the rendering begins.

    Well, yeah. I want something as close as possible to live edits while rendering, or at least IPR. Hence why I wrote the necessary shader support to change scene exposure on the fly while rendering. As I noted earlier, it's controllable via a light shader, and changes are sent via message passing to the surface shaders (instantly). You can resort to light linking if you want to control specific portions of the scene. The only problem is when I used physical camera settings (ISO/F-stop/shutter time) instead of just a general exposure offset.
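
    The message-passing route, sketched from the surface side (the parameter name is hypothetical; the light has to declare it as an output parameter):

        surface readsExposure(float Kd = 1;)
        {
            normal Nf = faceforward(normalize(N), I);
            color sum = 0;
            illuminance(P, Nf, PI / 2) {
                uniform float exposure = 1;  /* fallback if the light
                                                doesn't declare it */
                lightsource("exposure", exposure);
                sum += Cl * exposure * normalize(L).Nf;
            }
            Ci = Os * Cs * Kd * sum;
            Oi = Os;
        }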

    Made a very quick and dirty video to show this with 3delight IPR.

    Outside of geometry, visibility and path-traced area light changes, it works out pretty well as a fast preview mode. I still can't animate, make geometry changes or scrub the timeline, but I can't do those in Iray either.

  • wowie Posts: 2,029

    Btw, AWE Surface 1.3 is out. Check the freebie thread and my Google Drive for the update.

    I'll go into detail on some of the new updates later.

  • Takeo.Kensei Posts: 1,303
    wowie said:

    » show previous quotes

    I had no problem with RSL 2.0 inside Shader Builder. The only limitation that was irritating was from 3DL, which didn't implement full RSL 2.0. There is a notice of the limitation in the doc.

    wowie said:
    » show previous quotes

    I disagree with this. OpenGL with some hybrid ray tracing or even screen-space hacks is still going to be faster than a preview with a GPU path tracer. It allows you to get even faster feedback and make changes on the fly. Both 3ds Max and Maya have made efforts to get the viewport render as close as possible to what you see in Arnold, even going so far as to make sure procedural OSL shaders (noise generation) render the same. Also true with Houdini.

    My point is there is always a need for a fast viewport approximation. I can't animate, scrub the timeline or set keyframes in Iray preview mode. There's general progress in that area in other apps, though.

    Let's agree to disagree then, but just in the case of IBL preview. I personally find little interest in an IBL preview in the old OpenGL viewport. That doesn't show you how the light behaves.

    If we're talking about fast real-time PBR preview development, that's another story.

    wowie said:
    » show previous quotes

    Made a very quick and dirty video to show this with 3delight IPR.

    Outside of geometry, visibility and path-traced area light changes, it works out pretty well as a fast preview mode. I still can't animate, make geometry changes or scrub the timeline, but I can't do those in Iray either.

    I don't know what you did at the end of the video, and thus don't know what I should have seen. Since you didn't capture your mouse movements, that didn't help either.

  • RAMWolff Posts: 10,249
    wowie said:

    Btw, AWE Surface 1.3 is out. Check the freebie thread and my Google Drive for the update.

    I'll go into detail on some of the new updates later.

    Not seeing it in the Freebie thread!  

  • Sven Dullah Posts: 7,621
    RAMWolff said:

    » show previous quotes

    Not seeing it in the Freebie thread!

    wowie said:

    » show previous quotes

    It will always be the aweSurface DS App Folder Files.zip in the root folder. You can find older ones in their own folders with version names as the folder name.

  • RAMWolff Posts: 10,249
    wowie said:

    » show previous quotes

    Still not seeing it:

    https://drive.google.com/drive/folders/1iHeu7z2pChdgmimFbu0rOCLt5nXUFVcK

    If it's there, it's not named correctly! Thank you

  • Sven Dullah Posts: 7,621
    edited July 2019
    RAMWolff said:

    » show previous quotes

    aweSurface DS App Folder Files...here

    [Attached: awe build 1.3.png]
  • wowie Posts: 2,029
    edited July 2019
    I don't know what you did at the end of the video, and thus don't know what I should have seen. Since you didn't capture your mouse movements, that didn't help either.

    Yeah, sorry. I had the 'Light' tab as a separate window, and the Windows 10 Game Bar didn't capture that. Basically, I was tinkering with the values in this 'light': playing with temperature, tone mapping values and, of course, exposure.

    Obviously I couldn't do lens effects or vignetting; those will be in the imager shader.

    [Attached: Capture.JPG]
  • RAMWolff Posts: 10,249
    Sven Dullah said:

    » show previous quotes

    aweSurface DS App Folder Files...here

    Thank you
