Any way to decrease maximum distance for Depth render in Iray?

Is there any way to decrease the maximum distance for the Depth render in Iray? It is so pointless to render to an EXR when the actual information you get is stretched out between the camera and some point several miles in the distance. I tried scaling the entire scene up hugely to see if there was a hard limit I could approach, but the maximum distance seems to keep increasing endlessly as well. This is set up very badly right now.

Comments

  • j cade Posts: 2,310

    The depth pass is set up properly *for what it is.* The luminance value is the distance from the camera (in either cm or m, I forget which). There isn't a max value; the range runs from 1 to n→∞, interpolated linearly (this is important and useful for proper compositing). So any distance above 1 (pretty much everything) will look white, and in software that doesn't know what to do with it, that'll be the end of it.

    Basically the depth pass shouldn't really be thought of as an image; it's meant to be used by proper compositing software. That said, in most compositing packages (Blender certainly) you can plug it into a normalize node and that'll turn it into a 0-1 range.
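
    The same normalize step can also be done outside a compositor. Here is a minimal sketch in Python, assuming the depth canvas was saved as an EXR with a single float channel named "R" (the file name and channel name are placeholders, not necessarily what Iray writes):

    ```python
    import numpy as np
    import OpenEXR, Imath

    # Read one float channel out of the depth EXR (file/channel names are examples).
    exr = OpenEXR.InputFile("depth_canvas.exr")
    dw = exr.header()["dataWindow"]
    w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1
    pt = Imath.PixelType(Imath.PixelType.FLOAT)
    depth = np.frombuffer(exr.channel("R", pt), dtype=np.float32).reshape(h, w)

    # Linear remap of camera distance into a 0-1 range, like a normalize node.
    norm = (depth - depth.min()) / (depth.max() - depth.min())
    ```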


  • agent unawares Posts: 3,513
    edited February 2018
    j cade said:

    The depth pass is set up properly *for what it is.* the luminance value is the distance from the camera (in either cm or m I forget which).

    This implies I actually need to scale the scene smaller to get the dynamic range I need. So I'll try that.

    I know how to use EXRs, and I still need the scene's depth to land between full black and full white, because I need an extremely high dynamic range depth image for some NPR postwork. What Iray outputs at normal scene size is way too banded in the important parts.

    This pretty well supports the idea someone posted earlier that DS is somehow communicating scale to Iray incorrectly.

    EDIT: This completely fixed the problem, thanks.

    Post edited by agent unawares.
  • How are you compositing it? My workflow is to open the .exr in Photoshop and use Exposure (with the white and black point tools) to get something close to the range I want, then I put a Levels layer on and adjust it so the black and white points sit at the minimum and maximum used values (i.e., where the histogram actually has something in it). Other than it not being antialiased (which creates problems for hair and other fine details) and having no way to deal with transparency, it's a pretty effective technique.
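
    That black/white-point remap can be sketched in a few lines of Python too, assuming the depth has already been loaded into a float array (the function and parameter names here are illustrative only):

    ```python
    import numpy as np

    def remap_levels(depth, low_pct=0.1, high_pct=99.9):
        """Map the used value range of a depth array onto 0-1, clipping outliers."""
        # Pick black/white points from the occupied part of the histogram,
        # much like dragging the Levels sliders to where the graph starts and ends.
        black, white = np.percentile(depth, [low_pct, high_pct])
        return np.clip((depth - black) / (white - black), 0.0, 1.0)
    ```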

  • I take it into Photoshop and do HDR Toning with Equalize Histogram (roughly equivalent to the sketch at the end of this comment). Only for depth renders, obviously.

    Of course this does nothing to help with parts of the scene not being rendered inside the value range I wanted. But scaling the scene down fixed that.

    It turns out the result is still too banded for what I wanted anyway. Ultimately the results using it as a base were no better than my custom 3DL shader which actually handles transparency properly. Que sera sera.
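
    For reference, the Equalize Histogram step can be approximated outside Photoshop as well. A minimal sketch, assuming a float depth array is already in memory (names are illustrative, not a Photoshop API):

    ```python
    import numpy as np

    def equalize_depth(depth, bins=65536):
        """Approximate histogram equalization: spread pixel values evenly over 0-1."""
        hist, edges = np.histogram(depth, bins=bins)
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
        # Each depth value is mapped to its position in the cumulative distribution.
        return np.interp(depth.ravel(), edges[:-1], cdf).reshape(depth.shape)
    ```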
