Rendering Normals in Studio - 3Delight vs. Iray

EDIT: The nature of what I'm trying to accomplish has evolved since original posting, so I thought it best to rename the topic to something a little more indicative of what I'm doing.

Short of it: I'm trying to create sprites with normal maps, and have been using the following method to do so up until now:

I've been sticking with 3Delight, largely because I understand lighting better, and renders are a lot quicker.  I thought about switching to Iray and investing in a new graphics card, but only if I could reproduce similar output with the renderer.  Very brief research and some YouTube tutorials give me the impression that I don't want to use a pitch-black background and a single directional light for the renders, because low-lit renders prolong render time.  But is there any way in Iray to just create a "light-only" render of an object, like those Amphibioid X's in the above example?  Given that Iray materials seem to have exponentially more fields in the shading pane, I wouldn't know where to begin.


Comments

  • That might well be possible using Canvasses, in the Advanced tab of the Render Settings pane, but I'm not sure which combinations you'd want.

  • Knowing nothing about how canvases work...I noticed there's an option for Canvas Type to be "Normal." That could be what I need to address another question I had, which would eliminate the need to render five separate times with different directional lights.  That'd be amazing - but I don't want to get my hopes up just yet.  I'll be doing some reading up, or videoing up on Canvases.

  • JD_Mortal Posts: 760
    edited November 2019

    There is one for "depth" or "distance", too. Those should produce a grey-scale image which you can use to create the normals you want... However, it will not be a 256-grey bitmap. It would be a floating-point or ??? long-int 32/64 format, within the EXR file. (I honestly don't remember which format the EXR data format uses for that output canvas.)

    In any event, the produced image will have to be converted into grey-scale using a min/max setting for the upper- and lower-bound limits of the 0-255 RGB values, which is the standard format for greyscale as a 256-palette image. (That would be done in Photoshop or another art program.) You would then use that grey-scale image to create your "normal map" - unless you can find a way to skip the middle-man, or the "normal" canvas output works for you.
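
    If you'd rather script that min/max remap than do it by hand, here's a rough sketch in Python with NumPy - assuming you've already pulled the depth canvas out of the EXR as a floating-point array (the depth values below are made up for illustration):

```python
import numpy as np

def depth_to_greyscale(depth, lo=None, hi=None):
    # Clamp to [lo, hi], then rescale to an 8-bit 0-255 greyscale image.
    # If no bounds are given, the image's own min/max are used.
    depth = np.asarray(depth, dtype=np.float64)
    lo = depth.min() if lo is None else lo
    hi = depth.max() if hi is None else hi
    scaled = (np.clip(depth, lo, hi) - lo) / (hi - lo)
    return (scaled * 255).round().astype(np.uint8)

# Made-up 2x2 "depth buffer" in scene units
grey = depth_to_greyscale(np.array([[1.0, 2.0], [3.0, 5.0]]))
```

    The bounds matter: picking `lo`/`hi` tighter than the scene's full depth range spends the 256 grey levels on just the part of the scene you care about.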

    https://www.daz3d.com/forums/discussion/109146/create-a-depthmap

  • Don't have Photoshop at the moment, so I've had to jump through some extra hoops to see if these are working.  I've had the most luck with the Normal Canvas, which does indeed deliver what it promises, albeit in a different coloring convention than what I'm familiar with.  I can't quite figure out how to work around this, at least, this early in the morning.
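
    If the coloring-convention mismatch is the usual one (OpenGL-style +Y green versus DirectX-style -Y green), inverting the green channel converts between the two - that's a guess at what the mismatch is, sketched here in Python/NumPy:

```python
import numpy as np

def flip_green(normal_map):
    # Invert the green channel: converts between OpenGL-style (+Y)
    # and DirectX-style (-Y) normal-map conventions.
    out = np.array(normal_map, dtype=np.uint8, copy=True)
    out[..., 1] = 255 - out[..., 1]
    return out

# A single "flat" (camera-facing) pixel
pixel = np.array([[[128, 128, 255]]], dtype=np.uint8)
flipped = flip_green(pixel)
```

    If the canvas output instead swaps which axis goes to which channel, a channel reorder rather than a flip would be needed.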

    Specular and Diffuse have given me less consistent results.  In a few cases, I've seen the Diffuse canvas produce a solid white image of the object, but I've seen it come out differently based on what I'm rendering.  It's probably a weird thing happening within the converter, that I have no control over at the moment.  But, I also used a plane in the background of the scene for Test 3, that wasn't around in Test 2.  Anyway, I'll show what I'm trying to achieve for Diffuse, and Normal in the attachments.  I'll do a little more testing, but in the meantime, I have a few questions:

    • I know that the Node List is supposed to tell Iray what to render, and what not to.  If I don't include a Node List, that seems to render the entire scene.  In some cases, I may want to render the object and a background plane, and in others, just the object -- so I'd create two node lists: one with just the object, and one with the object + the plane.  Is that a correct assertion?
    • What does changing the Active Canvas actually do?  At first I noticed that it affected my output renders (PNG file), but I don't think it's actually producing a render of whatever said canvas is set to be--or if it is, it's doing so under some parameters within the renderer that are balanced for higher color-depth than the PNG format, leading to the "dim" results.
      • What I'm more interested in, though, is if there's a way I could skip over the EXR part entirely, through changing the Active Canvas.
    • Are the Diffuse and/or Specular canvases dependent on the scene lighting?  I was able to confirm that the normal canvas gave me the same result when testing with two separate directional lights (which is what I want), but since I can't seem to get the converted Diffuse and Specular EXR files into a state that makes sense to me, I'm not sure on this one.
    • Is there a way to render normal, diffuse and specularity via 3Delight?  Putting all of this together, I remember why I opted for 3DL in the first place, but what I wouldn't give for a way to skip over the manual normal map construction phase for every frame of animation.

     

    Amphibioid X Normal V Normal.png
    640 x 512 - 83K
    Amphibioid X Diffusel V Diffuse.png
    1024 x 512 - 143K
  • JD_MortalJD_Mortal Posts: 760
    edited November 2019

    NOTE... You can render your animation as still-frames. (Single images)

    Just make a unique pose at every key-frame and tell it to render frames 1-6 at 1-frame-per-second.

    Poof, instant product output, no matter what the lighting is. (Put all your lighting into "groups", and just hide the unwanted groups on the first frame.) That is also a great way to capture a generic base-shadow too... for the animation.

    No way to skip the EXR part. That is the standard, professional format/container used to hold the extended data that you just can't get with PNG or TIFF or JPG. GIMP is a free art program that can read an EXR file's data to create the "layers" you need to represent them. It can also create "normals" from grey-scale height-maps, or depth-maps.

    https://www.gimp.org/

    P.S. Gimp can now do 32-bit and 64-bit floating-point value editing, so EXR data will be no problem to use. 64-bit floating-point values are limited to specific functions and filters, but that shouldn't be a concern to you.
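
    The height-map-to-normal conversion mentioned above boils down to taking the surface gradient and packing it into RGB; a minimal sketch of the same idea in Python/NumPy (central differences here, whereas GIMP's filter offers fancier kernels):

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    # Surface gradient via central differences; the normal tilts
    # against the slope, with `strength` exaggerating the relief.
    h = np.asarray(height, dtype=np.float64)
    dy, dx = np.gradient(h)
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(h)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap components from [-1, 1] to 0-255 RGB
    return ((n * 0.5 + 0.5) * 255).round().astype(np.uint8)

flat = height_to_normal(np.zeros((4, 4)))  # flat plane -> uniform blue
```

    A flat height map comes out as the familiar uniform normal-map blue, since every normal points straight out of the surface.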

  • So after a lot of pressing buttons and observing what happened, I've run into a few issues.

    1. Iray tends not to render the colors out to how I would expect them to appear in a scene.  For instance, I used a solid white cube for calibrating cameras, but instead of the expected "255 255 255" value of the cube, it was quite a bit more dim (and noisy).  I may have already addressed this by setting the Environment Mode to "Scene Only".
    2. Stemming from this, all of my renders are a lot darker. In fact, using the AoA Advanced Ambient Light no longer works.  I get a solid black object over an opaque background, instead of the desired flat, fully lit color render.
    3. I was hoping that the Diffuse canvas would accomplish exactly what I mentioned above...basically creating a Diffuse map of "fully lit object with no shading from light sources."  Turns out, I think this one is dependent on the lighting as well, because I'd get something different if I changed the angle of a distant light.  Finally, the Diffuse canvas just looks incredibly bright and overexposed, like in the second image of previous post.
    4. My depth canvas is just a solid white image. Oof.

    Curious if anyone can offer insight onto any of these roadblocks.

    Tangentially, I know that the LineRender9000 Fresnel Reflection v camera creates something like a normals render, albeit greyscaled.  Don't know if I could use that as a makeshift depth map or not, though I'm more inclined to think not.  It may very well just be applying a greyscale to a render similar to the Normal canvas.

  • So after a lot of pressing buttons and observing what happened, I've run into a few issues.

    1. Iray tends not to render the colors out to how I would expect them to appear in a scene.  For instance, I used a solid white cube for calibrating cameras, but instead of the expected "255 255 255" value of the cube, it was quite a bit more dim (and noisy).  I may have already addressed this by setting the Environment Mode to "Scene Only"

    Pure white (and pure black) are generally to be avoided. Noise means there isn't enough light falling on the surface to converge it in a reasonable time/number of samples.

    2. Stemming from this, all of my renders are a lot darker. In fact, using the AoA Advanced Ambient Light no longer works.  I get a solid black object over an opaque background, instead of the desired flat, fully lit color render.

    The AoA Advanced lights are all for 3Delight, not Iray.

    3. I was hoping that the Diffuse canvas would accomplish exactly what I mentioned above...basically creating a Diffuse map of "fully lit object with no shading from light sources."  Turns out, I think this one is dependent on the lighting as well, because I'd get something different if I changed the angle of a distant light.  Finally, the Diffuse canvas just looks incredibly bright and overexposed, like in the second image of previous post.
    4. My depth canvas is just a solid white image. Oof.

    Did you try adjusting it? If you're using Photoshop, there's a specific command for HDR tone mapping.

    Curious if anyone can offer insight onto any of these roadblocks.

    Tangentially, I know that the LineRender9000 Fresnel Reflection v camera creates something like a normals render, albeit greyscaled.  Don't know if I could use that as a makeshift depth map or not, though I'm more inclined to think not.  It may very well just be applying a greyscale to a render similar to the Normal canvas.

    LineRender9000 is also 3Delight, using the Scripted renderer.

  • chris the stranger Posts: 132
    edited November 2019

    Hey Richard. I think I've seen your name in enough of my posts that I have to say, I really appreciate all the help you've given.  I'm fairly certain my usage of Daz Studio is esoteric, so I'm glad to see a few people who can chime in and help with some of the more niche problems I run into.  Now, onto the talking points:

    AoA is for 3Delight, not Iray - I was afraid of that.  Well, I'm treating this topic as a sort of tug-of-war between 3Delight and Iray, so that'd be a point for 3DL.  What I'd like to know specifically is whether there's a way to produce full ambient lighting in Iray (preferably not involving purchasing another product), so that I can get a diffuse color-only render of the model.  Think of it like a diffuse map for a model, except for a 2D image.

    Diffuse Canvas - I don't know if there's a way to adjust in GIMP, but I think the fact it changes based on lighting position means this isn't the canvas I'm looking for.

    Depth Canvas - It may be the canvas I'm looking for...but I would have to test this out in Photoshop, which I don't currently have.  Maybe there's a trial version I could look into?  Based on what I read in another post, I thought this might have something to do with render settings rather than post-image work.

    LineRender9000 is for Scripted 3Delight - Well, I already knew this one.  One of the things I'm trying to fish for, that I really should've addressed more specifically, is an answer to the question: Is there a way to do normal renders in 3Delight?  Because, if LR9K has a camera that creates a greyscale rendition of normal data, then it seems there should be an intermediate stage where said info is retrieved.  But looking purely at the support scripts, I haven't found it.

    The ultimate goal of the topic is to determine which renderer and method to use in my pipeline.  It's an ongoing internal debate I've been having over the course of the year, and next year is when I shift focus from graphics to programming.  3Delight + LR9K is the most intuitive method as of now, but has its share of shortcomings.  The biggest problem being that I have to go into every object I render and replace materials with custom shaders that have their own lighting properties when assembling a normal map--same goes for diffuse-only and specular-only renders.  But if there were a way to skip over the five separate lighting preset renders, that would save a tremendous amount of time - enough to justify switching rendering engines, if need be.  But if I could do it using 3Delight, that might be the best course.

  • By a Diffuse render you mean one that is flat colour? I think in PBR that would be albedo, though I'm not certain - and I don't know if Iray can do it.

    As for normal renders - you want a normal map showing the direction, in world space, that the surfaces are facing? In principle you could use Shader Mixer - take the output normal from any normal/bump/displacement brick and feed it, possibly after transformation to a different space, into the Colour input of the Surface Root shader. My attempts seemed to give a white-out if I tried transforming the normal, and I'm not sure if the results of just feeding it straight into the output were right or not:

    Normal render attempt, 3Delight.jpg
    1711 x 893 - 245K
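
    For reference, the remap a normal render needs is just c = n * 0.5 + 0.5 per component; straight-wiring the raw normal into a colour input skips that remap, so the negative half of each axis clips to black. A tiny sketch of the encoding:

```python
def normal_to_rgb(n):
    # Standard normal-map encoding: remap each component from
    # [-1, 1] to [0, 1]. Without this, negative components would
    # clip to 0 when interpreted as a colour.
    return tuple(c * 0.5 + 0.5 for c in n)

facing = normal_to_rgb((0.0, 0.0, 1.0))    # camera-facing: the classic blue
leftish = normal_to_rgb((-1.0, 0.0, 0.0))  # surface facing -X
```

    In Shader Mixer terms, that would mean scaling the normal by 0.5 and adding (0.5, 0.5, 0.5) before it reaches the colour input - assuming the brick graph allows inserting those math bricks there.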
  • chris the stranger Posts: 132
    edited December 2019

    It's been a while since I've stared puzzlingly at the post.  It's my fault, for not learning how to build/read shaders up until now (and I still plan on doing that).  But after a lot of trial and consideration, I think it's best that I stick with Scripted 3Delight for the project.  For simplicity.  I saw a lot of noise in my Iray test renders, and there seem to be more Render Settings parameters that deal with said noise in different ways than I can wrap my head around.  My scene-only lighting always seemed to be dim, even when cranking up the intensity of the light source--that could also be attributed to something that needs adjusting in Render Settings, but it's hard to keep track of all of that.

    And I'd still be doing a lot of material/shader swapping, which was one of the things I was hoping I could avoid by switching to Iray.

    I had a crazy thought just now--the kind that seems so absurdly simple and yet plausible, that you just have to try it anyway, despite decades of grim reality crushing your childlike enthusiastic spirit.  I may not know how to write a shader, but I do know that at the end of the day, the direction of a pixel-sized normal is encoded into RGB values, and "blue faces you."  ...be right back!

    edit: Turns out I can't recreate the lighting conditions necessary to make a scene that's color-coordinated to match the normal maps, though I learned a lot more about why the RGB values are what they are.  Maybe it's time to look into shaders again.

  • Was reading up on Sketchy when I discovered it came with an Iray environment that does solid lighting in all directions, giving the flat color render I'd been looking for in Iray.  My question transforms:

    • If Normal Canvas is still a viable option, how might I go about doing a meaningful color correction to the Normal Canvas output, in order to get something along the lines of the above Amphibioid example?  (All colors are relative to R: 127, G: 127, B: 255)
    • In the event that I can't use the Normal Canvas: How might I go about creating directional lighting using the environment, instead of scene lighting?  I know it functions like a dome, but I don't know where to put my black and whites so that I can isolate a direction.  Or actually, I probably just need to make one, and then rotate it in the settings for the desired direction.
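
    One crude post-correction for the first question, assuming the canvas output differs from the target convention only by a per-channel offset (it may also need axis swaps or flips, which this ignores): measure the colour the canvas gives a camera-facing surface, then shift every pixel so that colour lands on (127, 127, 255). A sketch in Python/NumPy with made-up measured values:

```python
import numpy as np

def recenter_normals(img, measured_flat, target=(127, 127, 255)):
    # Per-channel offset so the colour a camera-facing surface got
    # from the canvas lands on the target "flat" colour. Works in
    # int16 so the intermediate sums can go out of uint8 range.
    img = np.asarray(img, dtype=np.int16)
    offset = (np.asarray(target, dtype=np.int16)
              - np.asarray(measured_flat, dtype=np.int16))
    return np.clip(img + offset, 0, 255).astype(np.uint8)

# Hypothetical: the canvas rendered a camera-facing surface as (140, 120, 250)
corrected = recenter_normals(np.array([[[140, 120, 250]]]), (140, 120, 250))
```

    If the difference turns out to be more than an offset (a different axis mapping, or a nonlinear colour space), this won't be enough on its own.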
  • djigneo Posts: 283
    edited March 2020

    Hi, author of LineRender9000 here.

    I think what you want is some Shader mixer cameras that isolate the variables you're looking for. It's true that the Fresnel reflected v camera will give a "normals"-esque output, but it converts it to greyscale so the outliner doesn't get confused by the multiple colors touching each other. In your case you could probably load up the Fresnel reflected v camera in the shader mixer and remove that greyscale conversion.

    In any case, I would say shader mixer cameras in conjunction with LR9k AutoRender's multiple pass functionality is probably what you need to isolate the different "variables" in the output image. The drawback, from what I'm gleaning from this chain, is that you don't have shader cameras that output what you want. I know it's possible to output specular and diffuse passes in isolation using shader cameras... something I've been expanding upon in my own work, although I haven't developed shader cameras for those explicit purposes yet.

    The reason why you're not finding Fresnel reflected v indicated in any of the supporting scripts is because that shading is done from a Shader camera. (If you load Fresnel reflected v camera, change that to be the active camera and render with the LineRender9000 script you'll see it's just a "regular" render using a shader dynamically applied to the whole scene.) The point of using shader cameras in LineRender9000 was so that there was no need to replace all the surfaces/materials in the scene to get different outputs.

    I hope this is helpful.
