Iray Canvases, Light Path Expressions and Z-Buffer

Esemwy Posts: 578
edited July 2015 in Daz Studio Discussion

I know it's in there, but I can't quite get there. I've tried everything I can think of to get a Z-Buffer out of Iray.

Turning on Canvases / Depth, with or without a node list gives me a blank white screen (it renders fast though :-) ). I've also tried Distance instead of Depth (not sure what the difference is in this context), but it does the same.

Has anybody out there gotten a depth mask out of Iray? Along the same line, has anybody done any general experimentation with LPEs?

Any help will be greatly appreciated.

Edit: Tried rendering direct to file as well and turning off tone mapping. No change.

Post edited by Esemwy on

Comments

  • Richard Haseltine Posts: 102,449

    I'm seeing the same - and normals come out black. Even saving as .exr and adjusting the levels doesn't seem to show any detail. Please report this, unless anyone else can point out what we are doing wrong.

  • Esemwy Posts: 578

    Thanks for the confirmation, Richard. I'll give others some time to speak up, then I'll put in a ticket if all else fails.

  • Esemwy Posts: 578

    And the secret is... HDR Toning...

    There's depth detail, but all the values are above 1.0.

    Direct from the DAZ3D Help Center:

    The render currently looks white because all the values are above 1 in HDR range.

    You will have to tone map the image in a post process.

    Note: We have pre-toning the image on the “to do” list but it’s lower priority as the following process works.

    Here is my Distance.exr when I load it into Photoshop: [image006.jpg]

    Go to Image > Adjustments and apply HDR Toning: [image007.jpg]

    If you have an alpha channel you might see this (the alpha goes white, everything else goes black): [image008.jpg]

    Change the method to Equalize Histogram and the closest point in the image will go black, the furthest will go white. [image009.jpg]

    The result isn't anti-aliased, so you might want to apply a slight blur to soften it up… depends on your use case. [unnamed.png]
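    (If you'd rather script this step than do it in Photoshop, the sketch below does an equivalent linear rescale in Python. It assumes OpenCV built with OpenEXR support, and the file names are examples only.)

    ```python
    # Minimal sketch: rescale an Iray depth/distance canvas into 0-1 so it is
    # usable as a mask. Assumes OpenCV with OpenEXR support; file names are
    # examples only.
    import os
    os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2
    import cv2
    import numpy as np

    depth = cv2.imread("Distance.exr", cv2.IMREAD_UNCHANGED)  # raw 32-bit floats
    if depth.ndim == 3:
        depth = depth[..., 0]  # the canvas is grayscale; keep a single channel

    near, far = float(depth.min()), float(depth.max())
    mask = (depth - near) / (far - near)  # nearest point -> 0 (black), furthest -> 1 (white)

    cv2.imwrite("depth_mask.png", (mask * 65535).astype(np.uint16))  # 16-bit PNG
    ```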

    We currently only have canvases implemented but are looking at bringing the LPE workflow to the render dialog.


    We started implementing some options in the render dialog; on the left, you will see a small handle to open the panel.

    This allows you to take a look at the canvases while they render. [image010.jpg]

    As for depth vs. distance: one is calculated linearly from the view plane, and the other is calculated radially around the camera. If you were in the center of a cylindrical room, in the first you would see a vertical white line in the middle that fades to black from side to side; in the other you would see all black, since the cylinder is an equal distance from the camera on all sides…
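    (To make the cylinder example concrete, here is a small sketch of the two metrics; the camera and point vectors are made up for illustration.)

    ```python
    # Sketch of the two metrics for a single point; camera at the origin looking
    # down -Z. A point on a cylindrical wall, off to the side of the view axis,
    # has a smaller planar depth than its radial distance.
    import numpy as np

    cam = np.array([0.0, 0.0, 0.0])    # camera position (illustrative)
    view = np.array([0.0, 0.0, -1.0])  # unit view direction
    p = np.array([3.0, 0.0, -4.0])     # a point on the wall, 5 units from the camera

    depth = np.dot(p - cam, view)       # "depth": distance to the view plane -> 4.0
    distance = np.linalg.norm(p - cam)  # "distance": radial distance -> 5.0
    print(depth, distance)
    ```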


    You can find some info on canvases here: 
    http://www.migenius.com/doc/realityserver/latest/resources/general/iray/manual/index.html#/concept/canvas_names_in_render_targets.html#1

    We are still working on our own docs on this.


  • algovincian Posts: 2,636

    Isn't equalizing the histogram going to distort the depth information?

    - Greg

  • Esemwy Posts: 578

    Yep.

    Local Adaptation:

    Equalized:

    But since we don't have control of maximum distance in the Z-Buffer (presumably part of the "to do"), adjusting the histogram is necessary to get decent control in post-processing.
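    (One way to avoid that distortion, sketched below: pick a far limit yourself and rescale linearly before toning. The far_limit value is an assumed, scene-specific number, and the random array just stands in for a loaded canvas.)

    ```python
    # Sketch: normalize against a chosen far limit instead of equalizing, which
    # keeps the mask linear in distance. "far_limit" is a made-up scene value;
    # the random array stands in for a loaded EXR canvas.
    import numpy as np

    depth = np.random.uniform(50.0, 2000.0, (480, 640)).astype(np.float32)
    far_limit = 500.0  # hypothetical: scene units at which the mask saturates to white
    mask = np.clip(depth / far_limit, 0.0, 1.0)
    ```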

  • Hi Esemwy,

    how do you export the image as EXR? From the DS menu I only get the default low-range formats (tif, jpg, bmp and png). Perhaps I'm missing something obvious.

    Thanks in advance for any reply

  • Esemwy Posts: 578
    edited August 2015

    Rendering direct to file is the only way to get the EXR. DAZ will create a directory with the same name as your TIF (png, jpg, whatever) with "_canvases" appended. In that directory will be files named "<base>.<canvas>.exr".

    Example:

    Render with Canvas1.LPE direct to file: "Sphere.tif"

    You get: "Sphere.tif" and "Sphere_canvases/Sphere.Canvas1.LPE.exr"

    The names may differ on Windows, but that should get you there.
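    (If you want to pick the canvases up in a script, here is a quick sketch using that layout; the render name is just the example above.)

    ```python
    # Sketch: find the canvas EXRs Daz writes alongside the main render, using
    # the "<name>_canvases/<name>.<canvas>.exr" layout described above.
    from pathlib import Path

    render = Path("Sphere.tif")
    canvas_dir = render.with_name(render.stem + "_canvases")
    for exr in sorted(canvas_dir.glob(render.stem + ".*.exr")):
        print(exr.name)  # e.g. Sphere.Canvas1.LPE.exr
    ```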

    Post edited by Esemwy on
  • Saving from the render window will also create the _canvases folder; you don't have to render to file.

  • Esemwy Posts: 578

    Confirmed. Guess I just never checked once I read that advice elsewhere.

  • So, yeah... I'm waaaaay late to the party on all this Iray stuff, but I have a ton of experience with it in Maya and other places that I'm trying to translate across. My dumb question of the day is: where would I find that "canvases" folder, and where is that .exr saved? All I'm seeing are png, jpg, bmp and tif.

  • Tobor Posts: 2,300
    edited September 2015

    The Canvas feature is in the Advanced tab under Render.

    When you render out with one or more canvases, save as usual (png, tif, jpg, it doesn't matter), and a folder with the EXR canvas files for that pass is also created in the same directory. 

    Post edited by Tobor on
  • Esemwy Posts: 578

    Relighting with Iray Canvases (shameless self promotion) has a fair bit of general information on the canvases pane as well. 

  • almahiedra Posts: 1,353
    Esemwy said:

    Relighting with Iray Canvases (shameless self promotion) has a fair bit of general information on the canvases pane as well. 

    Thanks Esemwy, bookmarked.

  • Figured I'd pop back into this thread since it was helpful previously... thanks, btw... :)

    I am wondering if anyone has a solution for getting the results of a depth pass in Iray to factor in transparency. For instance, if you're rendering a scene with a bunch of transmapped foliage, your depth pass will be based on the geometry and not on what would actually render, so you end up with grey for that geometry instead of the shape of what is actually visible... if that makes sense.

    The same issue exists in Poser, interestingly enough, but the answer to it there, for me anyway, has long been to use Semidieu's Advanced Render Settings (formerly at RDNA... wow, how times have changed...). That set of rendering scripts takes a different approach to creating the Z-depth channel, and it's the best I've seen outside of high-end software. Dude really outdid himself with that package... but I digress.

    Is there something along those lines for Iray? Or perhaps something that would switch rendering from Iray to a 3Delight-based solution just for that pass? If there is, I want it. And if there isn't, I want to work with someone to help make it happen. :)

    -Les

  • Esemwy Posts: 578

    I've had a ticket in on Iray for five months; I just pinged them again. Anyway, to your question: there's a depth camera in Age of Armour's Atmospheric Effects Cameras for 3Delight, but I haven't gone there yet. I saw one forum post (which I now cannot find) that at least implied it was broken as well. You can select faces and make them invisible in the Geometry Editor, and that will work with the depth canvas, but that's kind of a blunt instrument. If you happen to have or purchase the above-mentioned camera, let me know. I may post a question on the Nvidia forum about it and see if there's a more general solution.

  • seeker273 Posts: 449

    Any news on this one? The depth mask is unusable without transparency. Also, it would be good to be able to set the start and end points in the scene.

    It's things like this that I miss from Poser; someone was always working on a solution. I wish Semidieu would jump over and help out :(

  • algovincian Posts: 2,636
    seeker273 said:

    Any news on this one? The depth mask is unusable without transparency. Also, it would be good to be able to set the start and end points in the scene.

    It's things like this that I miss from Poser; someone was always working on a solution. I wish Semidieu would jump over and help out :(

    Use 3DL. I wrote shaders/scripts that use surface-based shaders (which support transparency). What is considered near/far is controlled by the placement of two nodes within the scene (a near node and a far node).

    - Greg

  • Esemwy Posts: 578
    seeker273 said:

    Any news on this one? The depth mask is unusable without transparency. Also, it would be good to be able to set the start and end points in the scene.

    It's things like this that I miss from Poser; someone was always working on a solution. I wish Semidieu would jump over and help out :(

    Use 3DL. I wrote shaders/scripts that use surface-based shaders (which support transparency). What is considered near/far is controlled by the placement of two nodes within the scene (a near node and a far node).

    - Greg

    Could you go into a bit more detail on this? I haven't done much with 3Delight shaders, but having a functional depth camera would be nice.

  • algovincian Posts: 2,636
    Esemwy said:
    seeker273 said:

    Any news on this one? The depth mask is unusable without transparency. Also, it would be good to be able to set the start and end points in the scene.

    It's things like this that I miss from Poser; someone was always working on a solution. I wish Semidieu would jump over and help out :(

    Use 3DL. I wrote shaders/scripts that use surface-based shaders (which support transparency). What is considered near/far is controlled by the placement of two nodes within the scene (a near node and a far node).

    - Greg

    Could you go into a bit more detail on this? I haven't done much with 3Delight shaders, but having a functional depth camera would be nice.

    In my solution, there are no special cameras involved; it's all surface-based. Basically, a script grabs all the parameters for every material on every object and stores them in arrays. The script also grabs the locations of the near/far nodes and the camera in world space.

    Then a new shader is applied to each object, with its color set according to where the point being shaded lies in world space relative to the known locations of the near/far nodes and the camera. Other parameters like transparency and displacement are applied according to the stored values from the initial materials (yes, the conversion can be an issue for non-standard shaders).

    The script then renders the scene, and finally restores the initial materials (and continues rendering out other render passes).

    Now that I'm thinking about it, this may not be a good solution given the fact that transparency and displacement are handled differently in Iray. I'm just now starting to get into MDL a bit, so admittedly I'm no expert!

    Hope this helps.

    - Greg
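    (For clarity, here is the per-point math Greg describes, sketched in Python. The function and variable names are illustrative only; this is not his actual 3DL shader code.)

    ```python
    # Illustrative sketch of the per-point value the surface shader computes:
    # where the shaded point falls between the near and far nodes, as measured
    # from the camera. Names are made up; this is not the actual 3DL shader.
    import numpy as np

    def depth_value(point, camera, near_node, far_node):
        near = np.linalg.norm(near_node - camera)  # distance camera -> near node
        far = np.linalg.norm(far_node - camera)    # distance camera -> far node
        d = np.linalg.norm(point - camera)         # distance camera -> shaded point
        return float(np.clip((d - near) / (far - near), 0.0, 1.0))  # 0 = near, 1 = far
    ```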


  • seeker273 Posts: 449
    Esemwy said:
    seeker273 said:

    Any news on this one? The depth mask is unusable without transparency. Also, it would be good to be able to set the start and end points in the scene.

    It's things like this that I miss from Poser; someone was always working on a solution. I wish Semidieu would jump over and help out :(

    Use 3DL. I wrote shaders/scripts that use surface-based shaders (which support transparency). What is considered near/far is controlled by the placement of two nodes within the scene (a near node and a far node).

    - Greg

    Could you go into a bit more detail on this? I haven't done much with 3Delight shaders, but having a functional depth camera would be nice.

    In my solution, there are no special cameras involved; it's all surface-based. Basically, a script grabs all the parameters for every material on every object and stores them in arrays. The script also grabs the locations of the near/far nodes and the camera in world space.

    Then a new shader is applied to each object, with its color set according to where the point being shaded lies in world space relative to the known locations of the near/far nodes and the camera. Other parameters like transparency and displacement are applied according to the stored values from the initial materials (yes, the conversion can be an issue for non-standard shaders).

    The script then renders the scene, and finally restores the initial materials (and continues rendering out other render passes).

    Now that I'm thinking about it, this may not be a good solution given the fact that transparency and displacement are handled differently in Iray. I'm just now starting to get into MDL a bit, so admittedly I'm no expert!

    Hope this helps.

    - Greg


    That would be great! Can it do only the depth pass or would it have to do the whole render as well?

    It's no trouble popping over to 3DL for a depth map. Any chance you could share that script? You'd be a lifesaver!

  • algovincian Posts: 2,636
    seeker273 said:
    Esemwy said:
    seeker273 said:

    Any news on this one? The depth mask is unusable without transparency. Also, it would be good to be able to set the start and end points in the scene.

    It's things like this that I miss from Poser; someone was always working on a solution. I wish Semidieu would jump over and help out :(

    Use 3DL. I wrote shaders/scripts that use surface-based shaders (which support transparency). What is considered near/far is controlled by the placement of two nodes within the scene (a near node and a far node).

    - Greg

    Could you go into a bit more detail on this? I haven't done much with 3Delight shaders, but having a functional depth camera would be nice.

    In my solution, there are no special cameras involved; it's all surface-based. Basically, a script grabs all the parameters for every material on every object and stores them in arrays. The script also grabs the locations of the near/far nodes and the camera in world space.

    Then a new shader is applied to each object, with its color set according to where the point being shaded lies in world space relative to the known locations of the near/far nodes and the camera. Other parameters like transparency and displacement are applied according to the stored values from the initial materials (yes, the conversion can be an issue for non-standard shaders).

    The script then renders the scene, and finally restores the initial materials (and continues rendering out other render passes).

    Now that I'm thinking about it, this may not be a good solution given the fact that transparency and displacement are handled differently in Iray. I'm just now starting to get into MDL a bit, so admittedly I'm no expert!

    Hope this helps.

    - Greg


    That would be great! Can it do only the depth pass or would it have to do the whole render as well?

    It's no trouble popping over to 3DL for a depth map. Any chance you could share that script? You'd be a lifesaver!

    I had originally planned on making this a product here at DAZ (along with many other analysis passes, as part of my NPR work). Not long after I started developing it, DAZ announced Iray. To be honest, it's not worth the effort to get it into a form for public consumption, as developing for 3DL in DAZ is a dead end.

    I do believe there are already shaders for 3DL in the store that will crank out depth maps with transparency support; perhaps somebody who knows for sure could chime in. Keep in mind that the issues with displacement being handled differently will still exist.

    - Greg

  • Tobor Posts: 2,300

    I think we've been looking at these passes as Photoshop-friendly layers, and that may not always be the intended use case. Not everything is made to go through Photoshop, or at least not without additional filtration.

    A .NET or other script could read the depth map, looking at the pixel values that denote distance from the camera. Tone mapping isn't necessary (nor is an alpha channel, since you'd be applying the depth information only to objects visible in a color pass); these are raw 32-bit values. I suppose if you wanted to keep it solely within Photoshop, you could write a custom filter for it, though I imagine it would be dog slow for any image of notable size.
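    For instance, here is a rough sketch of that kind of script in Python: read the raw 32-bit canvas directly and use it to fog a color pass, with no tone mapping involved. The file names and fog values are assumptions, and it presumes OpenCV built with OpenEXR support.

    ```python
    # Rough sketch: use the raw 32-bit depth canvas directly in a script, here
    # to add simple exponential fog to the color pass. File names and the fog
    # values are placeholders.
    import os
    os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # set before importing cv2
    import cv2
    import numpy as np

    depth = cv2.imread("Sphere_canvases/Sphere.Depth.exr", cv2.IMREAD_UNCHANGED)
    if depth.ndim == 3:
        depth = depth[..., 0]                     # grayscale canvas
    color = cv2.imread("Sphere.tif").astype(np.float32) / 255.0  # BGR, 0-1

    fog_density = 0.002                           # hypothetical, 1/scene-units
    f = np.exp(-fog_density * depth)[..., None]   # 1 near the camera, 0 far away
    fog_color = np.array([0.7, 0.7, 0.7])         # light grey fog
    out = color * f + fog_color * (1.0 - f)

    cv2.imwrite("fogged.png", (out * 255).astype(np.uint8))
    ```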
