Albedo shader for Iray (Denoising aid)

Matt_Castle Posts: 2,573
edited October 2020 in Freebies

After a lot of experimentation with how to get albedo data out of DS to assist denoising AIs, this is the latest version of my Iray Albedo shaders (though it's quite possible you'll find other uses for them beyond denoising). As of V0.4, it can also output world-space normals.

The underlying shaders have been built from scratch, but take the same input parameters as Iray Uber; this means they can be applied to existing surfaces and will correctly retain parameters like texture instance tiling and diffuse overlays (it also makes things easier to fix if a user accidentally saves these shaders into their main session file). However, surfaces using other shaders may require some manual intervention.

I've also packaged in a Render Settings preset that should provide the fastest/best results. As the pass can be optimised down to a path length of 1 and doesn't have to worry about noisy lighting, it should be able to render a decent albedo pass in only a moment or three (which, given how much it can improve the performance of AI denoisers, can save a huge amount of time).

Please read the included Read Me, as it details some common issues and workarounds for them.

Feedback is appreciated. Similarly, if you find any bugs that aren't documented in the Read Me, please let me know.

Download link: Iray Albedo Shader V0.4 (Released 15th October 2020)

Old Versions:
Iray Albedo Shader V0.3 (4th October 2020)


Planned improvements:

- Create alternative shaders that will accept the parameters from the Blended Dual Lobe and 4 Layer shaders. I may consider trying to support other shaders as well.


Albedos and Denoising:

Since it might not be immediately apparent, I feel it's worth demonstrating the value of additional canvas data for denoising AIs.

As an example, here are some full-size crops from my Rainfall render:

Original render (which took about 8.5 hours on a GTX 1050 Ti, and is still pretty noisy)
Intel OI Denoised render (no canvas information)
Intel OI Denoised render with canvas information

Hopefully it's apparent how much the canvas information improves the AI's ability to distinguish noise from detail (particularly when it comes to things like freckles!). As a result, despite the time needed to separately render the albedo pass, it still made for a massive saving on render time, without any compromise on the sharpness of the image.
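For anyone who wants to feed the canvases to the denoiser by hand rather than through a frontend, here's a minimal sketch of what that can look like from Python. It assumes you've exported the beauty, albedo and (optionally) normal canvases to PFM files and are calling the oidnDenoise example app that ships with Intel Open Image Denoise; the file names are placeholders, and the exact flag spellings vary between OIDN releases, so check the usage text of your build.

```python
import subprocess

# Illustrative only: the file names are placeholders, and the oidnDenoise
# flag names ("--hdr", "--alb", "--nrm", "-o") may differ between
# Open Image Denoise versions.
subprocess.run([
    "oidnDenoise",
    "--hdr", "render_beauty.pfm",    # noisy beauty canvas
    "--alb", "render_albedo.pfm",    # albedo pass rendered with this shader
    "--nrm", "render_normal.pfm",    # optional normal pass
    "-o",    "render_denoised.pfm",  # denoised output
], check=True)
```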

Of course, for those people who have very fast systems, it probably remains more practical to just render for more samples, but this may be useful for others (like me) who are reliant on less powerful hardware. (Or perhaps for those attempting very ambitious scenes that are taking forever to converge).


Comments

  • mCasual Posts: 4,607
    edited October 2020

    My free frontend for Intel's Open Image Denoise:

    https://sites.google.com/site/mcasualsdazscripts9/mcjdenoise

    I think I read someone saying that DS4 can handle the PFM format, but my system uses ImageMagick and it works fine for me.
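    (For reference, PFM is a simple enough format that you can also write it yourself rather than going through ImageMagick; here's a rough NumPy sketch of a writer, assuming you already have the image as a float RGB array in linear space - the file name is just a placeholder.)

    ```python
    import numpy as np

    def write_pfm(path, image):
        """Write a float32 RGB image (H x W x 3) as a PFM file.

        PFM stores rows bottom-to-top, and a negative scale value on the
        third header line marks the pixel data as little-endian.
        """
        image = np.asarray(image, dtype=np.float32)
        height, width = image.shape[:2]
        with open(path, "wb") as f:
            f.write(b"PF\n")
            f.write(f"{width} {height}\n".encode("ascii"))
            f.write(b"-1.0\n")              # negative scale = little-endian
            f.write(image[::-1].tobytes())  # flip rows into PFM order

    # e.g. write_pfm("albedo.pfm", albedo_array)  # albedo_array is hypothetical
    ```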

    I didn't test the albedo feature very much, but I think it worked with the Intel-supplied samples.

    Before that, I made a PC exe program that despeckled by replacing pixels that were sharply different from surrounding pixels ... with the average of those pixels:

    https://sites.google.com/site/mcasualsdazscripts7/mcjdespeckle
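    (Roughly speaking - and this is only my reading of the description above, not the actual mcjDespeckle code - the idea can be sketched in a few lines of NumPy/SciPy:)

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def despeckle(image, threshold=0.25):
        """Replace pixels that differ sharply from their 3x3 neighbourhood mean.

        `image` is an (H, W, 3) float array in the 0-1 range; `threshold` is
        the per-channel difference beyond which a pixel counts as a speckle.
        """
        local_mean = uniform_filter(image, size=(3, 3, 1))
        speckles = np.abs(image - local_mean).max(axis=-1) > threshold
        result = image.copy()
        result[speckles] = local_mean[speckles]
        return result
    ```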

     

     

    Attachment: denoyz.jpg
  • mCasual Posts: 4,607
    edited October 2020

    1 - Albedo render with your shader on every surface and your render settings (but the horns were not light gray in the real subsequent render)

    2 - 60-pass render with default Iray settings

    3 - Denoised using Intel Open Image Denoise and my mcjDenoise script, with use of the albedo image

    4 - Denoised using Intel Open Image Denoise and my mcjDenoise script, without use of the albedo image

    5 - The mcjDenoise settings (note I had to rename the albedo image file to a very specific name)

    Attachments: tahiti41_60Passes_alb.png, tahiti41_60Passes_DespeckWithoutAlbedo.jpg, tahiti41_60Passes_out.png, despecme.jpg, tahiti41_60Passes.png
  • mCasual Posts: 4,607
    edited October 2020

    Comparison:

    If you look at the upper lip, the bright light there was eaten up by the denoiser without use of the albedo.

    (Note this image is 1280 wide but the forum scales it to 1080; in the attachments it's unshrunk.)

    Attachment: comparison.png
  • Matt_Castle Posts: 2,573
    mCasual said:

    1 - Albedo render with your shader on every surface and your render settings (but the horns were not light gray in the real subsequent render)

    Yes, some surfaces can come out quite differently if they rely on a large contribution from things like translucency (although I have offered an albedo shader that tries to mix that in), gloss or other things that aren't the base colour. However, as far as I know, the denoisers mostly use the albedo and normal as a reference for whether there is a sharp transition in shade/colour/normal in the image, which allows more detail to be retained at that point. The exact colours don't seem to be too important (ultimately, it has to assume that any surface could be lit by non-white light).

    ~~~~~

    On which note, best results obviously also benefit from the denoiser being given normal data, but in my experience the quality of Iray's normal canvases is questionable - my interpretation is that Iray only calculates the first sample when it's asked to provide a normal canvas, and no more. I've been working around that by rendering the canvas as large as my system can handle and then scaling down, but this has its limits as far as quality goes.
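    (If you're scaling a normal canvas down, it's worth averaging the vectors and re-normalising them rather than just resizing the colour image; a rough NumPy sketch, assuming the canvas has already been decoded into an (H, W, 3) array of unit vectors with dimensions divisible by the scale factor:)

    ```python
    import numpy as np

    def downscale_normals(normals, factor=2):
        """Average factor x factor blocks of a normal canvas and re-normalise."""
        h, w, _ = normals.shape
        blocks = normals.reshape(h // factor, factor, w // factor, factor, 3)
        averaged = blocks.mean(axis=(1, 3))
        lengths = np.linalg.norm(averaged, axis=-1, keepdims=True)
        return averaged / np.maximum(lengths, 1e-8)  # guard against zero-length vectors
    ```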

    I'm now wondering if it would be possible to create a specialised shader to output that data.

    While simply getting it to use the normal maps as if they were base colour maps would be trivial, and possibly useful, true normal data should also include the surface geometry... hmm. It sounds like it should be possible, if I can work out whether there is (or how to build) a shader block that can extract surface vectors. At that point, I think it should be possible to convert those into colour maps and feed them into the albedo shader to export as a normal map.

    That could be an interesting experiment.

     

  • Matt_Castle Posts: 2,573

    On that note, initial experimentation is promising. I've been able to extract geometry normal data and feed that to the albedo shader in order to get a shader that procedurally renders the normals of the surfaces. It's currently world-space normals, but it may be that the AIs will actually accept that. (Haven't tested that one yet.)

    I'll need to work out how to merge that with texture normal data, but it seems like there's potential in the idea, which should hopefully allow much higher-quality denoising. More on that as I have time to work on it (I'll be gamesmastering an RPG this evening, so I'll be busy for the next few hours).

  • Thanks for your work so far; I'm looking forward to you solving the normal problem. It will increase the usefulness of the denoiser, especially for animation.

    Cheers,

    Daniel

  • Matt_Castle Posts: 2,573

    I'm working on this, although I'm still wrestling with the issue that Iray internally stores its normals in world space, and I've not yet figured out a way to translate that to the camera's perspective within the shader itself. I've been able to convert it afterwards by processing the image with information about the difference between camera and world space, but that's an extra layer of inconvenience on top of layers of inconvenience.
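    (For anyone else doing the conversion in post: it's just the camera's rotation applied to each pixel's vector; a rough NumPy sketch, where the 3x3 world-to-camera rotation matrix is something you'd have to extract from the camera's transform in DS yourself:)

    ```python
    import numpy as np

    def world_to_camera_normals(normals_world, world_to_camera_rotation):
        """Rotate an (H, W, 3) canvas of world-space unit normals into camera space.

        Only the camera's 3x3 rotation matters here - translation is irrelevant
        for direction vectors.
        """
        rotation = np.asarray(world_to_camera_rotation, dtype=np.float64)
        # n_cam = R @ n_world, applied per pixel.
        return np.einsum("ij,hwj->hwi", rotation, normals_world)
    ```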

    The jury is still out on whether denoisers work well with world space normals, but at the very least, I do have a draft version of the shader that now has a "Normal vector Mode". It probably needs a little more refinement, but I'll try to make it available soon.

  • Eustace Scrubb Posts: 2,698
    edited October 2020

    Matt_Castle said:

    I'm working on this, although I'm still wrestling with the issue that Iray internally stores its normals in world space, and I've not yet figured out a way to translate that to the camera's perspective within the shader itself. I've been able to convert it afterwards by processing the image with information about the difference between camera and world space, but that's an extra layer of inconvenience on top of layers of inconvenience.

    The jury is still out on whether denoisers work well with world space normals, but at the very least, I do have a draft version of the shader that now has a "Normal vector Mode". It probably needs a little more refinement, but I'll try to make it available soon.

    I used normals to set up tri-axial refraction planes (pleochroism) in my gemstone shader (QSabot's Sinbad's Magic Gems at 'Rosity: I go by QSabot there) by feeding the "normal map" channel into an algorithm that determines its angle by its color: the blue-yellow axis is forward (Z-axial), the red-cyan axis is horizontal (X-axial), and the green-magenta axis is vertical (Y-axial). But based on my limited knowledge of MDL and the limits of Shader Mixer, it was all camera-dependent.

    The way to do it is to multiply your Normal colors by Red, Blue and Green.  I'll post a shader-tree if that will help.
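    (In plain maths terms, the multiply-by-pure-colour trick just zeroes the other two channels, which is equivalent to picking out one component of the encoded vector; a tiny NumPy illustration with a made-up encoded normal:)

    ```python
    import numpy as np

    normal_colour = np.array([0.7, 0.4, 0.9])  # example encoded normal (RGB)

    # Multiplying by pure red/green/blue zeroes the other channels...
    x_only = normal_colour * np.array([1.0, 0.0, 0.0])  # -> [0.7, 0.0, 0.0]
    y_only = normal_colour * np.array([0.0, 1.0, 0.0])  # -> [0.0, 0.4, 0.0]
    z_only = normal_colour * np.array([0.0, 0.0, 1.0])  # -> [0.0, 0.0, 0.9]

    # ...and decoding back to a signed axis value follows the usual convention.
    x_axis = 2.0 * normal_colour[0] - 1.0  # maps 0..1 back to -1..1
    ```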

  • Eustace Scrubb Posts: 2,698
    edited October 2020

    Here you are: the green lines coming from the left are Float Value 1 (I can't recall why this multiplication is necessary, but it is). The pale RGB lines coming down mid-algorithm are the colors to map to each axis. The output lines (middle blue, exiting screen right) are plugged into the Refraction Color and Top Coat color in the gemstone shader.

    But this is how you can implement camera-relative Normal values in your shader.

     

    Attachment: Using Normals to Set Color.jpg
  • Matt_Castle Posts: 2,573

    I've looked at pulling the data straight from a normal map instance like that, but it doesn't seem to account for normals generated by bump maps or displacement maps. Also, it seems it's still provided in world space, which is apparently the standard for Iray.

    As is, I've attached the current version of the blocks for my procedural generation of normals. The normals come as a Float 3 out of the Normal block; they're split into axes by accessors, have 1 added (normal vectors are usually in the range -1 to 1, but I need them to always be positive - you can't have a negative colour), and are multiplied by 0.5 to compress that new 0-to-2 range into a 0-to-1 range.

    The result is then squared - which I'm not certain is correct, but it seems to use the colour space better (although that's quite possibly a matter of monitor gamma) - and recombined using a Colour inputs block.

    It seems to have much of the same intent as your shader blocks, but it uses accessors rather than multiplication to get the axes, and as it's always using pure R, G and B as its shades, it uses a colour input block rather than needing a stack of blending blocks.

    I can try adding in the multiplication by a float of one as an intermediate stage, but I can't see how that can change anything. (My guess is that it's a factor of the way you've split the colour channels, and the multiplication is just being used to effectively translate the data type from Colour to Float so that it'll be accepted as a Blend weight).
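    (For anyone following along outside ShaderMixer, the block chain above boils down to something like this NumPy sketch; the squaring step is included exactly as described, even though it may just be standing in for a gamma adjustment:)

    ```python
    import numpy as np

    def encode_normals(normals, apply_square=True):
        """Convert an (H, W, 3) array of unit normals into displayable colours."""
        colours = (normals + 1.0) * 0.5  # shift -1..1 to 0..2, then halve to 0..1
        if apply_square:
            colours = colours ** 2       # possibly just compensating for gamma
        return np.clip(colours, 0.0, 1.0)
    ```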

    Attachment: Normals.png
  • Eustace Scrubb Posts: 2,698
    edited October 2020

    duplicate.

  • Matt_Castle said:

    I've looked at pulling the data straight from a normal map instance like that, but it doesn't seem to account for normals generated by bump maps or displacement maps. Also, it seems it's still provided in world space, which is apparently the standard for Iray.

    As is, I've attached the current version of the blocks for my procedural generation of normals. The normals come as a Float 3 out of the Normal block; they're split into axes by accessors, have 1 added (normal vectors are usually in the range -1 to 1, but I need them to always be positive - you can't have a negative colour), and are multiplied by 0.5 to compress that new 0-to-2 range into a 0-to-1 range.

    The result is then squared - which I'm not certain is correct, but it seems to use the colour space better (although that's quite possibly a matter of monitor gamma) - and recombined using a Colour inputs block.

    It seems to have much of the same intent as your shader blocks, but it uses accessors rather than multiplication to get the axes, and as it's always using pure R, G and B as its shades, it uses a colour input block rather than needing a stack of blending blocks.

    I can try adding in the multiplication by a float of one as an intermediate stage, but I can't see how that can change anything. (My guess is that it's a factor of the way you've split the colour channels, and the multiplication is just being used to effectively translate the data type from Colour to Float so that it'll be accepted as a Blend weight).

    I'd have to check, but I believe the "negative color" issue is why I multiplied it by Float-1 in mine: I had to account not only for positive RGB values, but for negative (CMY) ones on the same terms. As the sample is from a gemstone shader, I could easily have wound up with my high-R-high-G surfaces registering on either of those axes or both, rather than as what they were, which was anti-B (that is, Y) and properly accorded to the Z axis.

    Alternately, would taking an absolute value of the normal, rather than adding 1, be an option?

     

  • Mine also allowed for user input of the X-, Y-, and Z-axis refraction colors, with Z (blue) as the primary when looking straight at the camera, which is why I needed the axial normals as float values: they tell how much of each color to refract in or leave out.

  • Matt_Castle Posts: 2,573
    Eustace Scrubb said:

    Alternately, would taking an absolute value of the normal, rather than adding 1, be an option?

    I don't want to contradict the conventional standards for normal mapping (where each vector has a unique colour), as that may cause compatibility issues, and I can't see that an ABS function would make the shader drastically more efficient than doing it as (X+1)*0.5.
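    (A quick worked example of why abs() loses information that the (X+1)*0.5 convention keeps:)

    ```python
    import numpy as np

    n_left  = np.array([-1.0, 0.0, 0.0])  # surface facing -X
    n_right = np.array([ 1.0, 0.0, 0.0])  # surface facing +X

    # abs() folds opposite directions onto the same colour...
    print(np.abs(n_left), np.abs(n_right))          # both [1. 0. 0.]

    # ...whereas the conventional encoding keeps them distinct.
    print((n_left + 1) * 0.5, (n_right + 1) * 0.5)  # [0. 0.5 0.5] vs [1. 0.5 0.5]
    ```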

    ~~~~~

    As is, I can't work out how your shader is producing perspective-space normals. When I fed a normal map instance block into my shader tree (which should functionally decompose the input in much the same way as yours - multiplying the G and B channels by 0 is broadly the same as taking the X component of a Float 3), the output was still in world space. Orbiting the camera around the model in the Iray preview did not result in the colours shifting with the camera position; they remained statically assigned to surfaces.

    However, I'll concede I may be misunderstanding what blocks you're using or what inputs you're feeding them.

  • Matt_Castle said:

    As is, I can't work out how your shader is producing perspective-space normals. When I fed a normal map instance block into my shader tree (which should functionally decompose the input in much the same way as yours - multiplying the G and B channels by 0 is broadly the same as taking the X component of a Float 3), the output was still in world space. Orbiting the camera around the model in the Iray preview did not result in the colours shifting with the camera position; they remained statically assigned to surfaces.

    However, I'll concede I may be misunderstanding what blocks you're using or what inputs you're feeding them.

    Funny thing is that that, or something more nearly like that, was my own intent: optimally, the normal-based effect would be locked to the object's XYZ axes (pleochroism being an axially-locked property of the crystal), but world space can at least pretend to that in ways that camera space cannot. Shall we swap algorithms and give it a test?

     


  • Matt_Castle Posts: 2,573

    Update Time!

    Iray Albedo V0.4 is now available. This is a single merged shader with mode switches that replace the functionality of the (formerly separate) Albedo Translucency shader, as well as allowing it to procedurally generate and output world-space normals.

     

    Eustace Scrubb said:

    optimally, the normal-based effect would be locked to the object's XYZ axes (pleochroism being an axially-locked property of the crystal), but world space can at least pretend to that in ways that camera space cannot. Shall we swap algorithms and give it a test?

    I'd be entirely happy to swap. The latest version is linked above, and if you need me to clarify details, feel free to ask.

    As far as what you're looking for, you can actually go the whole way - Iray does allow you to convert between world-space and object-space normals, so you could actually lock it to the crystal rather than the world. It was one of the things I could work out how to do, but the various enum values for the different co-ordinate spaces don't include an option for translating into camera or perspective space, and object-space normals were actually a step back for me (unless I could find a way to make that object the camera).

  • My Iray normal-calculation algorithm is above: the resultant color mix was fed into the Refraction Color channel.

    I stripped it down to just the Normal-based nodes, and my algorithm definitely favors the current camera.

  • Matt_Castle Posts: 2,573
    edited October 2020

    Then can you confirm that the block on the left is a Functions/Geometric/Texture Instance Normal Map block? When I try feeding the output of such a block into my decomposer, the outputs I'm getting are definitely world space, and are not affected by camera position.

    As far as converting between vector spaces, I can confirm you want to plug the input into a Transform Vector (or possibly Transform Normal - not certain of the difference) block with Enum Value blocks as Coordinate Space inputs (they should populate with values when plugged in), which will allow you to take it from World Space to Object Space. So you should be able to get axially locked properties.
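    (For the record, the maths those blocks perform for the world-to-object step is just the inverse of the object's world rotation applied to the vector; a rough NumPy sketch - strictly, for normals under non-uniform scaling you'd want the inverse transpose of the full transform:)

    ```python
    import numpy as np

    def world_to_object_normal(normal_world, object_world_rotation):
        """Re-express a world-space normal in an object's local axes.

        `object_world_rotation` is the 3x3 rotation part of the object's world
        transform; for a pure rotation its inverse is simply its transpose.
        """
        rotation_inv = np.asarray(object_world_rotation).T
        return rotation_inv @ np.asarray(normal_world)
    ```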

  • Eustace Scrubb Posts: 2,698
    edited October 2020

    I could tell you better if ShaderMixer had something exotic like, y'know, a brick-library search function!


    That does seem to be the nearest version, though I assembled the shader in DS 4.9, I think, so some of the brick names may have changed.

  • Eustace Scrubb Posts: 2,698
    edited October 2020

    Matt_Castle said:

    Then can you confirm that the block on the left is a Functions/Geometric/Texture Instance Normal Map block? When I try feeding the output of such a block into my decomposer, the outputs I'm getting are definitely world space, and are not affected by camera position.

    As far as converting between vector spaces, I can confirm you want to plug the input into a Transform Vector (or possibly Transform Normal - not certain of the difference) block with Enum Value blocks as Coordinate Space inputs (they should populate with values when plugged in), which will allow you to take it from World Space to Object Space. So you should be able to get axially locked properties.

    Okay, I've been working on the problem and comparing our two solutions, and so far I'm getting interesting results: crack open the attached shader (with handy icon) and look under the hood. The two If-switches alternate between your Float-X method and my Multiply-Colors method for isolating the axes (this makes a difference), and between the Texture Instance Normal Map input I used and the Geometry Map input from your own (this apparently does not). I don't think I left anything out.

    On the other hand, I did discover that my current build (4.2.12.6 Public Build) refuses to render through the floor plane from below, with or without a geometric floor.  But it might also be my Iray settings.

    Attachments: Normal Test Lab.duf, Normal Test Lab.duf.png
  • @Matt_Castle, I've found the brick we both wanted, I think:  MDL > Default Modules > state Transform Normal.

    The only hitch I've found is that you have to plug the From and To variables into your User Interface block to use them at all, AFAICT.

  • Matt_Castle Posts: 2,573

    I mentioned that above, but while it will accept world and object space as inputs (which can be provided from an Enum Value block), perspective space is not an option, unfortunately. It solves your problem, but not mine.

  • Hmn. (I don't recall your mentioning that block, but I'm not going to say you didn't - I'll check up-thread.) The color-multiplication method I used (it isolates RGB channels by multiplying by purified-channel colors that zero out the other two) is very consistent in holding to Perspective Space linkage unless the Transform Normals block is established upstream. I'm not sure yet how else to assemble a Normal tree that sets up a perspective/camera calculation of it, then.
