3Delight Laboratory Thread: tips, questions, experiments


Comments

  • Oso3DOso3D Posts: 15,045

    Some of this helps tip me off to some of the problems I was having with rendering hair, and I'm realizing I did things in a somewhat clunky way (I was using AoA light flagging to try to simplify hair occlusion, which ... now seems silly)

     

  • wowiewowie Posts: 2,029

    Some of this helps tip me off to some of the problems I was having with rendering hair, and I'm realizing I did things in a somewhat clunky way (I was using AoA light flagging to try to simplify hair occlusion, which ... now seems silly)

    Light flagging, light category or light linking can be a pretty powerful tool. In fact, I know most VFX artists use it a lot and won't call a renderer production-ready if the feature doesn't exist. However, it is easy to fall into the trap of using light linking to compensate for an improper or, perhaps more aptly, implausible material setup. Unfortunately, in the pre-PBR workflow, that's pretty common practice.

    With PBR/physically plausible shading, the material/surface does most of the work in getting 'the look'. Set up correctly, the surface will work with just about any lighting scenario. The only problem is when you use delta/punctual lights (i.e. point and spot lights), which carry no area information whatsoever. Well, two problems actually: one, the incoming light doesn't wrap around the object the way you'd expect from a large emitter, and two, the specular highlight ends up far more 'focused' than a large emitter would produce. Add improper light falloff (or no falloff) and all hell breaks loose. In fact, even Disney's artists fell into that trap and tried to adjust material settings to compensate. The way to go is to use a proper mesh light; with a mesh light, physically plausible materials always work correctly.

    Unfortunately, the only publicly available area light for 3delight and DS is UberAreaLight. The problem with UberAreaLight is that its specular emission gets killed if you use an ambient occlusion light (UE2, AoA Ambient, and even the Scripted Renderer point-based occlusion light), so it only fixes one of the problems (the light wrapping). Kettu's area lights work properly in that respect. But until they're out, I prefer to work with the delta lights. Plus, adjustments to delta lights (intensity, falloff and position) can be made on the fly in IPR; with area lights, you need to restart the render every time you change the intensity or move the lights around.

  • Oso3DOso3D Posts: 15,045

    Question about bounce...

    Can you do an end run around bounce lights by giving materials raytrace reflection with high blur?

     

  • wowiewowie Posts: 2,029

    Question about bounce...

    Can you do an end run around bounce lights by giving materials raytrace reflection with high blur?

    Actually, that's the way to go for indirect specular. It doesn't break PBR conventions, since even the roughest dielectric IS reflective near 90 degrees. Generally, reflection blur samples should be around 8. Since the blurred reflection will pick up noise from other things (shadows, occlusion), you need to make sure those are clean first.
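    To sketch the idea outside any particular shader (a rough Python illustration, not RSL or actual DS/3delight code; trace_scene() and the jitter model are stand-ins): a blurry raytraced reflection is essentially an average of several jittered reflection rays, which is also why it inherits whatever noise those rays see on the surfaces they hit.

    import random

    def blurred_reflection(trace_scene, reflect_dir, blur, samples=8):
        # Average several jittered reflection rays to approximate a rough,
        # blurry specular reflection (re-normalising the jittered direction
        # is omitted for brevity).
        total = 0.0
        for _ in range(samples):
            # Larger 'blur' spreads the rays wider around the mirror direction.
            jittered = tuple(d + random.uniform(-blur, blur) for d in reflect_dir)
            # trace_scene() stands in for the renderer's trace() call; it returns
            # the radiance seen along the jittered direction, noise and all.
            total += trace_scene(jittered)
        return total / samples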

    As I noted a while back, enabling reflections lets you pick up highlights from other surfaces. It's not a diffuse response though, so it's not a complete solution. For the diffuse side, you can just use a tinted, weak, diffuse-only AO light, or go with UE2's IDL mode if you want diffuse color bleed. UE2 IDL can be accelerated with Kettu's script (bundled with her kit) via ray caching. It is slower with new 3delight builds and DS 4.8 onwards, but I think that's because of how UE2 was written (and not 3delight's fault).

    Then there's the old-school trick - adding diffuse-only point light(s) with limited falloff and/or light linking.

  • Mustakettu85Mustakettu85 Posts: 2,933

    Confirmed: specular response on bumpy surfaces when DoF is on is different to the same surface when DoF is off.

    I first noticed it with my shaders and spent a LOAD of time trying to figure out why this could be.

    Today I decided to test with US2, and it's the same. Doesn't matter if it's displacement (traced) or bump.

    I wonder if it's intentional, or if there's something missing in the DS output when DoF is off. I'm thinking about asking on the 3DL forums.

    The renders attached are a simple sphere (subdivided, legacy Catmull-Clark) with US2 and a bump map (plus/minus 0.01). Specular roughness = 0.25, reflection (raytraced) roughness = 0.10 (16 samples), fresnel = on (default settings). On the left, DoF is on; on the right, it's off.

    http://www.mediafire.com/download/p787ire8bl8crm8/noise4000_24bpp.png - this is the bump map. // made in Paint.NET; if anyone wants to use the map, it's CC0 //

    [Attachments: ubersurf2_bump_dof.png, ubersurf2_bump_nodof.png]
  • wowiewowie Posts: 2,029

    Confirmed: specular response on bumpy surfaces when DoF is on is different to the same surface when DoF is off.

    I first noticed it with my shaders and spent a LOAD of time trying to figure out why this could be.

    Today I decided to test with US2, and it's the same. Doesn't matter if it's displacement (traced) or bump.

    I wonder if it's intentional, or if there's something missing in the DS output when DoF is off. I'm thinking about asking on the 3DL forums.

    The renders attached are a simple sphere (subdivided, legacy Catmull-Clark) with US2 and a bump map (plus/minus 0.01). Specular roughness = 0.25, reflection (raytraced) roughness = 0.10 (16 samples), fresnel = on (default settings). On the left, DoF is on; on the right, it's off.

    http://www.mediafire.com/download/p787ire8bl8crm8/noise4000_24bpp.png - this is the bump map. // made in Paint.NET; if anyone wants to use the map, it's CC0 //

    I noticed that a long time ago. Well, the bump/displacement relation to specular/reflection, not the DOF part.

  • Oso3DOso3D Posts: 15,045

    What the what? GD it.

    It's shitake like that that pushes me to Iray. Bah.

     

  • Mustakettu85Mustakettu85 Posts: 2,933
    wowie said:

    I noticed that a long time ago. Well, the bump/displacement relation to specular/reflection, not the DOF part.

     

    Bump/displacement by themselves - yup, that's logical. But why does DoF come into play... even in "oldschool" shaders.

    What the what? GD it.

    It's shitake like that that pushes me to Iray. Bah.

     

    Hey Will, I remember you said your video card makes Iray render more or less fast. Could you please test a similar sort of sphere in Iray with the bump I linked to above? With and without DoF (depth of field). Thanks!

  • wowiewowie Posts: 2,029

    Bump/displacement by themselves - yup, that's logical. But why does DoF come into play... even in "oldschool" shaders.

    Implementation quirks, maybe? Is that with the built-in renderer or the standalone (and RIB export)? I never tried DOF in 3DfM, so I don't know if that behaviour is also present there.

    Edit:

    Just tried it with 4.7 and saw the same thing.

  • Oso3DOso3D Posts: 15,045

    Mustakettu: Going to try to test that for you again today.

    I tried yesterday...  aaaand the power kept going out.

     

  • Oso3DOso3D Posts: 15,045

    I tried to replicate as much as possible. Verdict, no appreciable difference between DOF and no DOF in Iray. DOF takes very slightly longer to render.

    Left is with DOF, right is without, both with bump map set to 5 (had to guess a bit).

    [Attachment: Balls.png]
  • Mustakettu85Mustakettu85 Posts: 2,933
    wowie said:

    Implementation quirks, maybe? Is that with the built-in renderer or the standalone (and RIB export)? I never tried DOF in 3DfM, so I don't know if that behaviour is also present there.

    Edit:

    Just tried it with 4.7 and saw the same thing.

    The RIBs that are exported from DS render like that in the standalone, too. I tried in Maya, but can't get the DoF in focus, silly me =) I guess it's worth it to just export to RIB from Maya and compare the syntax to the DS RIBs. 

     

    I tried to replicate as much as possible. Verdict, no appreciable difference between DOF and no DOF in Iray. DOF takes very slightly longer to render.

    Left is with DOF, right is without, both with bump map set to 5 (had to guess a bit).

    Thank you, Will! What's the reflection roughness value here?

  • Mustakettu85Mustakettu85 Posts: 2,933

    Found a thread with a possibly related issue.

    http://www.3delight.com/en/modules/forum/viewtopic.php?t=4369#p22346

    Will see what playing with focus factor may change.

    Oh BTW, in 3DfM there is a "rounded corners" option for nicely subdividing those no-bevel hard surface models. I wonder if it's possible to implement this in DS. Iray seems to have something like that (there is an option in the uber base, but I haven't tested it)...

  • Oso3DOso3D Posts: 15,045

    Oh, sorry... I set Glossy Roughness (corresponds to reflection roughness, I think) to .1

  • Mustakettu85Mustakettu85 Posts: 2,933

    Thanks Will!

    So, after testing it looks like focus factor doesn't change anything. 

    I did some Maya tests and I think (not sure, but it seems so) it is the same in 3DfM. 

    But this difference isn't always noticeable, only when the bump/displacement is quite dense and over a certain amplitude.

  • wowiewowie Posts: 2,029

    Hmm, interesting.

    http://www.3delight.com/en/index.php?page=3DFK_features

    I forget if the Maya version has the Light Mixer stuff. As far as I can tell, only 3DfK and 3DfMax had it (I didn't look at the Softimage version).

  • Mustakettu85Mustakettu85 Posts: 2,933

    3DfM does seem to have it,  though can't say I went as far as to test it. The Maya interface is so difficult for me to learn =( 

  • wowiewowie Posts: 2,029

    3DfM does seem to have it,  though can't say I went as far as to test it. The Maya interface is so difficult for me to learn =( 

    Maya's interface isn't the easiest, I agree. I think they don't want to change it drastically for fear of alienating older users. It's like a plethora of tools piled on top of each other. I find it easier to grasp by taking it in smaller chunks, depending on what you want to learn/accomplish.

    Of course, having a big workspace is a real plus. I used to run a multi-display setup, so going back to a single display sucked. I need to get two more displays so I can work with more than one app: one for parameters and viewport navigation, one for asset browsing and shaders, and the last for IPR.

    Nice to know I'm not the only one who doesn't like to use HDRIs.

  • Mustakettu85Mustakettu85 Posts: 2,933

    I agree, it does feel like a hodgepodge of tools. And Hypershade is next to impossible to use meaningfully on a smaller screen. But then it probably wasn't designed with laptops in mind =) 

    There are HDRIs and HDRIs. The Chappie team used that custom dedicated "morphing" HDRI system, and it worked beautifully. I'd say that for highly reflective surfaces that have to be matched to a backplate, HDRIs are probably the best solution. But when you have a fully CG set, it's of course liberating to have finer control in the form of area lights.

    Speaking of area lights... there is this $1 kitchen model on TurboSquid by anim8or64. It's for Max, but there's an OBJ which is salvageable for DS after being put through Bryce. Yeah, fun stuff, but hey, it all ain't bad for the price. 

    It has horrendous surface/group naming in the OBJ, so it's taking me a while to make a usable scene out of it. But here's what I saw:

    I first made a rig of delta point lights for those tiny bulbs, but look at those fireflies. Then I tried small one-poly planes with PT area lights... and no fireflies. 

    Thank you for the instancing tip, BTW =) It's an awesome thing for light rigs like that.

    [Attachment: nofireflies_with_pt_arealights.png]
  • wowiewowie Posts: 2,029

    Hypershade needs to be used in a fully maximized window. But that behaviour is generally true of any node-based tool.

    About fireflies, or noise in general, there was some discussion about that on the odforce and SideFX forums. One of the pros (I think it was a TD from Digital Domain, I forget the details) pointed out the use of a clamp set to 2 on the final specular/reflection channels (since those tend to be the ones with fireflies). He used 2 because clamping to 1 - a straight 0 to 1 range - would cut too much and lose the dynamic range. Newer Luxrender builds with the bias option also include a radiance clamp, specifically for dealing with fireflies.
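    In rough numpy terms, that clamp is a one-liner (the channel name and array layout here are assumptions for illustration, not any renderer's actual AOV output):

    import numpy as np

    def clamp_fireflies(specular_aov, limit=2.0):
        # Clamp the per-pixel radiance of a specular/reflection channel.
        # A limit of 1.0 would crush everything back into a 0-1 range and
        # throw away dynamic range; 2.0 keeps bright highlights but caps the
        # rare, extremely hot samples that show up as fireflies.
        return np.minimum(specular_aov, limit)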

    Of course, Mantra has a neat way of shooting additional rays with its variance threshold. You hardly need more than 9 samples if it's tuned properly, but I think the basic thinking is logical: preserve enough samples in a given area when there's a lot of variance, but clamp down when there's very little difference.

    The most effective approach is probably using object-space information rather than just camera space, which is all traditional denoisers have to work with (they know nothing about the geometry, etc.). That's how Altus and the smarter denoisers work, cross-referencing info from the textures, normals, curvature and so on. They work nicely for static images, but for animation the Disney denoiser still rules, because it also references info from different frames.
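    As a very loose illustration of that cross-referencing idea (nothing like what Altus actually does internally - just a toy joint-bilateral filter that uses a normals AOV to decide which neighbours are allowed to be averaged):

    import numpy as np

    def guided_denoise(color, normals, radius=2, sigma_n=0.2):
        # color and normals are HxWx3 arrays. Pixels are averaged with their
        # neighbours, but a neighbour only contributes if its normal is
        # similar, so the smoothing doesn't bleed across geometric edges the
        # way a plain blur would.
        h, w, _ = color.shape
        out = np.zeros_like(color)
        for y in range(h):
            for x in range(w):
                acc = np.zeros(3)
                wsum = 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            d = normals[ny, nx] - normals[y, x]
                            wgt = np.exp(-np.dot(d, d) / (2.0 * sigma_n ** 2))
                            acc += wgt * color[ny, nx]
                            wsum += wgt
                out[y, x] = acc / wsum
        return out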

  • Mustakettu85Mustakettu85 Posts: 2,933

     

    The point is, it's a good example to convince users that trace() and a lifelike emitter size are a better choice than bsdf() and a delta light.

    Hardcoded clamping is something that might one day bite the user in the rear. If Shader Builder could generate interface scripts with dynamic behaviour (like the Iray Uber base interface), I'd put an extra switch in, but as-is, the switch lists are way too long, so I don't want it. And I just can't be bothered to make custom frontend scripts for shaders; area lights were a special case because the scripts Shader Builder makes for them are, plain and simple, unusable.

    And yeah, I'd rather the user always chose area lights - most DS users don't animate, they just want portraits, so an area light is the best choice for instantly improved lighting. I wouldn't want anyone to fudge around with points and spots and then complain that "this treacherous Russian's" kit is so hard to learn and hardly gives a better result.

    For anti-firefly filtering, I like this one =D : http://www.daz3d.com/forums/discussion/91071/mcjdespeckle-pcwin-app-remove-fireflies-in-iray-new-daz-studio-frontend/p1

  • wowiewowie Posts: 2,029

    Well, Renderman 21 has those new analytical lights. :) Not strictly area lights and not traditional delta lights. You can still use the light BxDF on any geometry if you want to go that route.

    https://rmanwiki.pixar.com/display/REN/Lighting

    Some Interesting bits from the Renderman 21 changelog:

    Reyes Rendering is Removed

    • RenderMan is now based on modern raytracing techniques to create a simple path to beautiful images.
    • Only the "raytrace" and "bake" hiders are retained in this release.

    RSL is Removed

    The RenderMan Shading Language (RSL) has been deprecated. RenderMan will print a warning if used and return black or artifact.

    Better Defaults for Rendering

    • The default hider is "raytrace", default bxdf is PxrDiffuse, and the default integrator is PxrDefault.
    • Trace displacements is now on by default.
    • The shadingrate default has been increased to improve performance since shading quality is not affected.
    • A default value for darkfalloff improves rendering performance for many scenes, especially those with dark areas that were potentially over-sampled.
  • Mustakettu85Mustakettu85 Posts: 2,933

    Probably a similar idea to those Iray lights that we call "photometrics" in DS - not attached to any explicit geometry, but they have a shape/size, etc. Carrara has also had something like that for years... but since those are so old, and Carrara's renderer is so arcane, I doubt they work on the same principles. Though, who knows...

    Can't seem to find - what is the shading language now in PRMan, OSL or C only?

    Wonder what the shading rate param is for, if REYES is out. Volumes?

  • wowiewowie Posts: 2,029

    Can't seem to find - what is the shading language now in PRMan, OSL or C only?

    Probably OSL. OSL was mentioned in some parts of the changelog. Haven't seen it expressed explicitly, though CGChannel says so.  I think that's where everybody's going.

    Wonder what the shading rate param is for, if REYES is out. Volumes?

    From the docs:

    Micropolygonlength

    Primitives are tessellated into micropolygons. The degree of tessellation is driven by the dice attribute micropolygonlength.

    Attribute "dice" "float micropolygonlength" [1]

    This is the average micropolygon edge length that the tessellation will try to produce. The larger the value, the larger the micropolygons.

    • larger micropolygonlength means coarser tessellation, your object may show faceting or sharp edges but should have lower rendering cost
    • smaller micropolygonlength means finer tessellation, your object should be smoother and better fit your ideal shape with a higher rendering cost

    By default, the micropolygonlength value is expressed in terms of pixels on the screen (i.e. the length, in pixels, of the micropolygon as projected on the screen). This can be modified by using the dice attribute strategy.

    The micropolygonlength attribute is a successor to the (deprecated) shading rate. While shading rate was expressed in terms of area, micropolygonlength is (as the name indicates) a length.

    This means that in order to get a similar tessellation level, using a shading rate value of X translates into using a micropolygonlength value of sqrt(X):

    • shading rate = 0.25, micropolygonlength = 0.5
    • shading rate = 1, micropolygonlength = 1
    • shading rate = 4, micropolygonlength = 2

    This is a useful conversion if you need to match previous tessellation.
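    In code form, the conversion is just a square root (a hypothetical helper for matching old settings, not an actual RenderMan API call):

    import math

    def shading_rate_to_micropolygonlength(shading_rate):
        # Shading rate was an area (in pixels squared); micropolygonlength is
        # an edge length (in pixels), so matching the old tessellation means
        # taking the square root: 0.25 -> 0.5, 1 -> 1, 4 -> 2.
        return math.sqrt(shading_rate)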

     

    I do like the approach - one surface shader for metals, dielectrics, even glass. Hair and volumes get a different shader. Some pretty neat features in it.

     

    Specular Fresnel Mode

    In Artistic mode, specular fresnel response will be controlled by its Face Color, Edge Color, and Fresnel Exponent.

    In Physical mode, specular fresnel response will be controlled by its Refractive Index, Extinction Coefficient, and Edge Color.

    Extinction Coefficient (Physical Mode)

    Extinction Coefficient is a second refractive index for the material, useful for characterizing metallic behaviors. Channel values for this parameter typically lie in the range 1 - 3. Since we support 3 color values to capture the spectral effect, presets may be preferred over color pickers. When 0, the material reacts as a dielectric (glass, clearcoat). When non-zero, the material responds as a conductor would. Since this is based on physical values, you should find the presets more helpful than manually tweaking settings. Below are presets for Copper, Gold, and Nickel.
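    For context on why n and k together describe a metal: at normal incidence, reflectance follows the textbook Fresnel relation below (just the standard formula, not PxrSurface's actual implementation):

    def f0_from_complex_ior(n, k=0.0):
        # Normal-incidence Fresnel reflectance for a complex IOR (n + i*k).
        # k = 0 gives the familiar dielectric result (e.g. n = 1.5 -> ~0.04);
        # a non-zero extinction coefficient pushes reflectance up towards the
        # bright, tinted response you expect from a conductor.
        return ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)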

    Shading Tangent

    Controls the anisotropy direction. Only valid when it is connected to a pattern. This is useful for making brushed metals. Below are three examples using textures and an Anisotropy of -10

    Very nice.

    Specular Energy Compensation

    Applies fresnel energy compensation to diffuse and subsurface illumination lobes. A value of 1.0 attempts to fully balance those results by darkening them against the specular and rough specular illumination responses.

    Specular and Rough Specular roughness are also taken into account. The effect fades off as specular face or edge color approaches 1.0, so metals can add a diffuse baseline color. Look at Clearcoat Energy Compensation for a visual example.

    Clearcoat Energy Compensation

    Applies fresnel energy compensation to all lobes other than clearcoat itself. A value of 1.0 attempts to fully balance those results by darkening them against the clearcoat illumination response.

    Clearcoat roughness is also taken into account. The effect fades off as clearcoat face or edge color approaches 1.0, so metals can add a diffuse baseline color. Left is 0.0 (default) Right is 1.0. Notice the darkening (changes in energy conservation) that happens.

    I really like the iridescence and fuzz options too. They even went to the trouble of adding a double-sided switch to the fuzz and SSS. Lots of flexibility.

     

  • Mustakettu85Mustakettu85 Posts: 2,933

    Thanks! The feature list is pretty much standard these days. The interesting idea that I haven't yet tried is driving anisotropy direction with patterns. Though it may be convoluted to implement in DS.

  • wowiewowie Posts: 2,029

    Thanks! The feature list is pretty much standard these days. The interesting idea that I haven't yet tried is driving anisotropy direction with patterns. Though it may be convoluted to implement in DS.

    Two more things interest me, outside of the shaders. There's a new integrator that's still experimental - PxrUPBP - it's like bidirectional VCM for volumetrics.

    https://rmanwiki.pixar.com/display/REN/PxrUPBP

    PxrUPBP improves on the PxrVCM approach by using multiple importance sampling (MIS) to combine different techniques in volume rendering: combining photon points, photon beams and ray paths to give good radiance estimates in both dense and sparse volumes. The name UPBP comes from "Unified Points, Beams, and Paths". This integrator excels at rendering many kinds of participating media including “God rays” and volume caustics. This integrator is an experimental Integrator included with, but not exposed automatically, in RenderMan integrations.

    Second, checkpointing for rendering. Basically, it's like the resume rendering feature of Luxrender. SideFX introduced it into Mantra with Houdini 15 (I think).

    Filtering and sampling seem to be improved too. The adaptive sampling works in a similar way to Mantra's: determine the variance in a given area, then shoot more rays/gather more samples if it's higher than the threshold value. It seems to be quite effective too - about half the samples with no noticeable difference in quality.
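    The basic loop is easy to sketch (a toy version - sample_pixel(), the sample counts and the threshold are made-up illustrations, not Mantra's or RenderMan's real parameters):

    import statistics

    def adaptive_pixel(sample_pixel, min_samples=4, max_samples=64, threshold=0.01):
        # Take a few samples first, then keep adding samples only while the
        # variance of what we've gathered stays above the threshold: smooth
        # areas stop early, noisy areas get the extra rays.
        values = [sample_pixel() for _ in range(min_samples)]
        while len(values) < max_samples and statistics.variance(values) > threshold:
            values.append(sample_pixel())
        return sum(values) / len(values)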

    Something else I didn't see when I first looked at the lights: there are separate controls for diffuse and specular emission. Not physically correct of course, but it can be useful.

     

    Latest setup. I'm using a faux area light made of two delta lights to get more spread. To approximate physical falloff, I'm splitting each light into two separate ones - one with high intensity but a very sharp/short falloff, and another with low intensity and a shallow/far falloff. So four in total. With your physical delta lights, I can probably just use two.

    Works quite nicely as an approximation. I can resize the dummy parent node to control the emitter size similar to an area light.
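    To show the split in numbers (a toy sketch - the intensities and falloff exponents below are placeholders, not the actual settings used here): the high-intensity term dominates up close and dies off quickly, while the low-intensity term carries the distance.

    def faux_falloff(d, hot=8.0, hot_exp=3.0, fill=1.0, fill_exp=1.0):
        # Two delta lights at the same spot: a high-intensity term with a
        # sharp/short falloff plus a low-intensity term with a shallow/far
        # falloff. Their sum drops quickly near the emitter and lingers at
        # distance.
        return hot / d ** hot_exp + fill / d ** fill_exp

    for d in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(d, round(faux_falloff(d), 3))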

  • Mustakettu85Mustakettu85 Posts: 2,933

    ...I'm sure there are some great ideas re:volumes coming with the 3DL OSLtracer on the part of the DnA team, but I also suspect they'd be damn hard to get to work from within DS.

    wowie said:

     

    Latest setup. I'm using a faux area light made of two delta lights to get more spread. To approximate physical falloff, I'm splitting each light into two separate ones - one with high intensity but a very sharp/short falloff, and another with low intensity and a shallow/far falloff. So four in total. With your physical delta lights, I can probably just use two.

    Works quite nicely as an approximation. I can resize the dummy parent node to control the emitter size similar to an area light.

     

    Clever tricks with the light "hack", and looks great =) BTW there is a question for you here in JCade's thread... http://www.daz3d.com/forums/discussion/comment/1390471/#Comment_1390471

     

  • wowiewowie Posts: 2,029

    So, Disney's Hyperion approach isn't integrated into RIS 21. From the writeup, it seems like they only integrated the denoiser.

    https://www.fxguide.com/quicktakes/day-1-at-fmx-2015-rendering-and-20-years-of-weta-digital/

    Found it through this blog:

    http://blog.alexanderkucera.com/page:4

    Found this bit of info interesting:

    There are also effects that can take a long time to calculate with even just one sample, like a ray that bounces around skin for a while, just to exit somewhere else with less energy. In those cases Disney used a simple "shortcut": biased filtering.

    Say for Subsurface Scattering, Disney simply assumes that a ray exits a random maximum distance from the entry point with a certain amount of less energy. Which works well for most situations for things like SSS or hair. It has issues with small objects, but those can be worked around.

    I think that's what the Mercenaries guys did with Guerilla.

    http://dl.acm.org/citation.cfm?id=2927451

    If I remember correctly, they're using a falloff to handle that issue in Blender Cycles SSS.
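    A crude sketch of that shortcut as described (purely illustrative - the exit distribution and the energy falloff are assumptions, not Disney's or Guerilla's actual code): instead of simulating every bounce inside the medium, pick an exit point within some maximum distance of the entry point and attenuate the energy by how far the ray travelled.

    import math
    import random

    def cheap_sss_exit(entry_point, max_radius, energy):
        # Pick a random exit offset within max_radius of the entry point
        # (uniform over a disc on the surface)...
        r = max_radius * math.sqrt(random.random())
        angle = random.uniform(0.0, 2.0 * math.pi)
        exit_point = (entry_point[0] + r * math.cos(angle),
                      entry_point[1] + r * math.sin(angle))
        # ...and assume the energy simply falls off with the distance crossed,
        # instead of tracing the full random walk through the medium.
        exit_energy = energy * math.exp(-r / max_radius)
        return exit_point, exit_energy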

    Now here's something wackily useful

    http://www.vertheim.com/ies-generator.html

     

  • Mustakettu85Mustakettu85 Posts: 2,933

    Biased filtering is interesting. There may be several ways of implementing it. Wonder if the DNA devs are also doing it.

    The bit about a photon map used to deal with the sun-based fireflies is great. Last time I checked Arnold-related stuff, the devs were very much against any photon mapping, but it kinda seems a self-limiting option. Then, the only thing that seems to be faster than Arnold is the 3Delight OSLtracer (have you seen that comparison page on the 3DL wiki?), so they kinda can do whatever they want as long as it works =)

    The IES generator is cool, thanks a lot =)

  • wowiewowie Posts: 2,029
     Last time I checked Arnold-related stuff, the devs were very much against any photon mapping, but it kinda seems a self-limiting option. Then, the only thing that seems to be faster than Arnold is the 3Delight OSLtracer (have you seen that comparison page on the 3DL wiki?), so they kinda can do whatever they want as long as it works =)

    Funny you should mention Arnold. Here's Marcus' talk at SIGGRAPH.

    https://www.youtube.com/watch?time_continue=1696&v=35morxCJOIQ

    Nothing technical. Although the better sampling methods look promising, I find the comparison of CPU vs GPU more interesting. It explains nicely why big studios that can afford 2- or 4-way setups aren't migrating to GPU renderers. And in some things, there's only a negligible difference between the two.
