Another HDRI Question


Comments

  • Drip Posts: 1,206
    Drip said:

    You cannot make an HDRI from any standard image format with a single exposure and have it look right, and it will not be HDR.

    To create an actual HDRI requires taking many images and combining them. This article covers the technique for doing so with a camera:

    https://blog.hdrihaven.com/how-to-create-high-quality-hdri/

    The Beauty Canvas isn't a standard image format. It's an .exr file, which is capable of holding the full HDR range, and actually does so when it's made in Daz Studio. One could actually use an HDRI, render it, and get a new HDRI out of it. It won't be as detailed as the original HDRI, as data is lost during the rendering process, but it's technically possible.

    You cannot make an HDRI from any standard image format with a single exposure and have it look right, and it will not be HDR.

    That is true if you use regular photographs or regular renders (usually PNG or JPEG format), since the lights in those are severely limited by the colour range available in those formats (max 255 per channel in 8-bit RGB). So people combine images of the same subject/area taken at different exposure levels, use software to figure out the proper light emitted by the various light sources, and save the result in a format that goes beyond that per-channel limit. EXR is such a format, and rendering to a beauty canvas takes advantage of that and properly uses the extra colour range. So yes, a beauty canvas render *is* by definition an HDR image.
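
    You can check this with a few lines of Python. A minimal sketch, assuming imageio with an EXR-capable plugin (e.g. freeimage) is installed; the file names are placeholders:

        import imageio.v3 as iio

        # 8-bit PNG: integers 0..255, so "white" is a hard ceiling
        png = iio.imread("render.png")
        # EXR beauty canvas: 32-bit floats; 1.0 is nominal white,
        # but light sources can go far beyond it
        exr = iio.imread("beauty_canvas.exr")

        print(png.dtype, png.max())  # e.g. uint8 255 -- bright light clips to white
        print(exr.dtype, exr.max())  # e.g. float32 14.7 -- the real intensity survives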

    The quality loss from making a beauty canvas render of an HDRI is comparable to copying an old cassette tape onto another cassette tape. The original HDRI gets rendered, Daz Studio calculates and recalculates how all the light moves around and where it comes from, and it's guaranteed to interpret some tiny pieces differently each time you try this, especially if you fiddled around a bit with your camera and render settings. So the beauty canvas will be a processed copy of the original HDRI, not a 1:1 copy like a file copy would be.

  • Leonides02 Posts: 1,379

    You cannot make an HDRI from any standard image format with a single exposure and have it look right, and it will not be HDR.

    Not true. This figure was rendered in Iray and lit solely by an HDRI that was created from a series of in-game screencaps, stitched together into a panorama, which was subsequently turned into an HDRI by an AI-driven algorithm.

    - Greg

    Hi Greg - Which algorithm?


  • kwannie Posts: 870

    Oh wow, that is interesting. OK, so I can just render a scene and use it as a backdrop as long as I keep the camera in a static position. I tried this and the background looks great. And if I just leave the lights in the scene that I made the render with, the lights will match when I bring a character in, right?

    So, why are HDRIs such a big thing if you have to maintain a static view with HDRI maps also? The backdrop looks exactly like what a render with all of the scene components would look like.

  • Sevrin Posts: 6,310
    kwannie said:

    Oh wow, that is interesting. OK, so I can just render a scene and use it as a backdrop as long as I keep the camera in a static position. I tried this and the background looks great. And if I just leave the lights in the scene that I made the render with, the lights will match when I bring a character in, right?

    So, why are HDRIs such a big thing if you have to maintain a static view with HDRI maps also? The backdrop looks exactly like what a render with all of the scene components would look like.

    You don't have to maintain a static view.  Remember the animation I did?   The camera went all around the singer.  There's also the ground plane.  You don't get one with a backdrop the same way you do with an HDRI.

  • kwannie Posts: 870

    Sevrin, are you referring to the HDRI for moving cameras? OK, so when I tried to move the camera with the HDRI, the image on the HDRI got distorted when I moved the camera. I had it set to finite sphere with ground so I could adjust the scaling.

  • You cannot make an HDRI from any standard image format with a single exposure and have it look right, and it will not be HDR.

    Not true. This figure was rendered in Iray and lit solely by an HDRI that was created from a series of in-game screencaps, stitched together into a panorama, which was subsequently turned into an HDRI by an AI-driven algorithm.

    - Greg


    And that is, as you state, not a single EXPOSURE! What is so hard to understand?

    It is a single exposure - there was no bracketing. The stitching of screencaps was just to get a higher-resolution panorama. What don't you understand?

    - Greg

    "series of in-game screencaps." Plural.

    Yes, the series of 1920x1080 screencaps was used to create a single 8192x4096 panorama, which is still lower resolution than what the OP was talking about. After stitching, the image was still standard dynamic range.

    The article you linked does a good job of explaining the whole process of bracketing multiple exposures in order to create higher dynamic range:

    https://blog.hdrihaven.com/how-to-create-high-quality-hdri/

    Exposure bracket – a set of photos from an identical point of view with increasing or decreasing brightness. When merged together, taking the best-exposed parts of each one, they create a single image with a much higher dynamic range. Our monitors can’t display this higher dynamic range image, they don’t show anything brighter than “white” (RGB=255). Stitching a panorama when we can’t see all the parts of our images is hard, so to make things easier to see we can do some tonemapping.
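
    The "tonemapping" mentioned there is just a curve that squeezes values above display white back into the visible range. A toy Python sketch of the common Reinhard operator, purely illustrative and not the article's own method:

        import numpy as np

        def tonemap_reinhard(hdr):
            """Compress linear HDR radiance into 0..1 via L / (1 + L)."""
            return hdr / (1.0 + hdr)

        radiance = np.array([0.05, 0.5, 1.0, 8.0, 64.0])
        # All values land in 0..1; bright ones are compressed rather than clipped.
        print(tonemap_reinhard(radiance))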


    Like I said, none of the screencaps I took were bracketed. They weren't from an identical point of view, and there was no way for me to change the exposure in the game for each one anyway. This is why saying "You cannot make an HDRI from any standard image format with a single exposure and have it look right, and it will not be HDR" is not true.

    - Greg

    There's got to be a word for that exact moment when the other party realizes that they've encountered an individual with a much more nuanced understanding of the subject at hand, but only after they've doubled down on the same incorrect position a few times. There probably is in German, and it might sound like "Der Dunning-Kruger-Effekt" or something similar.

  • Sevrin Posts: 6,310
    kwannie said:

    Sevrin, are you referring to the HDRI for moving cameras? OK, so when I tried to move the camera with the HDRI, the image on the HDRI got distorted when I moved the camera. I had it set to finite sphere with ground so I could adjust the scaling.

    Looking at your images again, I think a lot of the difficulty has to do with the fact that you are trying to do this in a very restricted space, with your characters close to the "wall". When I did my club scene, the character was in the middle of a large open area, and I never showed her feet touching the ground. I got the idea from this video by kobamax that you posted: https://www.nicovideo.jp/watch/sm36231212 Meanwhile, in his other video, where you see the feet floating above the floor, it doesn't look right at all: https://www.nicovideo.jp/watch/sm36880864 The characters are not in an enclosed space in either scene.

    The HDRI thing is useful in a lot of situations, but you can't use it in every case.   Sometimes you have to use real geometry and textures.

    If anyone's really curious about the scene I did that I refer to, here's a link to a short GIF, and here's a video of a runthrough. I never got around to actually rendering the complete animation, but the runthrough shows all the camera angles, etc.


  • fastbike1 Posts: 4,078
    edited August 2020

    I wonder if some of the issue is that one or more of the posters is using the technical definition of HDRI, which does require a large number of exposure stops to get complete lighting information, while other posters are using a more casual "definition" from Studio. The common Studio usage I observe seems to relate more to a background scene associated with lighting. I also wonder how many people understand that the DR stands for Dynamic Range, rather than Definition, Resolution, or some such.

    I'm going to have to disagree with @Drip in that color range has nothing to do with a true HDRI. It is exposure range that makes the HDRI, not the resolution of the image or the "number of colors". One can make a decent HDRI from a single photographic image, but must make a number of exposure changes within the editing program to do so. This is what the HDR presets in such programs do.
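
    Mechanically, such a preset amounts to something like the following sketch (hypothetical and heavily simplified): synthesize exposure variants from the one photo, then blend them back together. Note that it reshapes the tones that are already there; it cannot recover light that the 8-bit original clipped.

        import numpy as np

        def pseudo_hdr_from_single(img_8bit, stops=(-2, 0, 2)):
            """Fake an HDR merge from one photo by simulating exposure changes."""
            base = img_8bit.astype(np.float32) / 255.0  # normalize to 0..1
            variants = [np.clip(base * 2.0 ** s, 0.0, 1.0) for s in stops]
            return np.mean(variants, axis=0)  # naive blend of the simulated exposures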

    Post edited by fastbike1 on
  • Sevrin Posts: 6,310
    fastbike1 said:

    I wonder if some of the issue is that one or more of the posters is using the technical definition of HDRI, which does require a large number of exposure stops to get complete lighting information, while other posters are using a more casual "definition" from Studio. The common Studio usage I observe seems to relate more to a background scene associated with lighting. I also wonder how many people understand that the DR stands for Dynamic Range, rather than Definition, Resolution, or some such.

    I'm going to have to disagree with @Drip in that color range has nothing to do with a true HDRI. It is exposure range that makes the HDRI, not the resolution of the image or the "number of colors". One can make a decent HDRI from a single photographic image, but must make a number of exposure changes within the editing program to do so. This is what the HDR presets in such programs do.

    Well, you can't really make a silk purse out of a sow's ear. You can fake it, though, and sell it on eBay! The full range of light information gets lost in an 8-bit image. Emulating and combining different exposures gets you part of the way there, but the lighting on your subject will still be flatter and less saturated, and it's not the same as starting with actual bracketed exposures that capture actual light. Sometimes, though, the background is more important than the light you get from it, and you can always add lights. IBL Master facilitates the use of HDRIs that aren't really HDR and need help in the lighting department.

  • fastbike1 Posts: 4,078

    @Sevrin "Emulating and combining different exposures gets you part of the way there, but the lighting on your subject will still be flatter and less saturated, and it's not the same as starting with actual bracketed exposures that capture actual light."

    I didn't dispute that, merely pointed out that a single image can make a "decent" HDR image. I've been a photographer for 35 years and am well practiced at combining multiple exposures into a single HDR image.

    I don't think every person doing renders is using a "True Color" monitor either. I would be interested to hear how many bits are required for a "full range of light information". It's been so long that I just can't remember whether I used 8-bit film, 16-bit film, 24-bit film or what. I do remember getting images that looked like what I saw when I shot them. <sarcasm off>

  • Sevrin Posts: 6,310
    fastbike1 said:

    @Sevrin "Emulating and combining different exposures gets you part of the way there, but the lighting on your subject will still be flatter and less saturated, and it's not the same as starting with actual bracketed exposures that capture actual light."

    I didn't dispute that, merely pointed out that a single image can make a "decent" HDR image. I've been a photographer for 35 years and am well practiced at combining multiple exposures into a single HDR image.

    I don't think every person doing renders is using a "True Color" monitor either. I would be interested to hear how many bits are required for a "full range of light information". It's been so long that I just can't remember whether I used 8-bit film, 16-bit film, 24-bit film or what. I do remember getting images that looked like what I saw when I shot them. <sarcasm off>

    Well, I gave an example of what it looks like when you use an 8-bit image vs. the same image in 32-bit on the first page. Even if you make copies of the image, adjust the exposure, and do an HDR merge, the flatness of the lighting is pretty obvious to anyone. I tried it, and while you get better colour, the light stays pretty flat. But like I said, sometimes the background image is more important than the lighting, and you can always fake the lighting.

  • kwannie Posts: 870

    Is there a good place to see some samples of animations that people have made of interior scenes that use only an HDRI as the environment? My issue is that most of the things I try to do are interior scenes. Essentially, I try to recreate MMD-style videos in DAZ Studio. The whole idea is having realistic characters performing music, singing, and dancing. Environment and atmosphere, though, are my nemesis. I can usually get away with animating up to 4 or 5 characters and doing a decent clean render at less than 20 seconds per frame if I only use the built-in backdrop with some color added to it. As soon as I add a wall, or curtains, or other geometry, my render times jump up to the 4-to-5-minute-per-frame range. But... geometry allows for the full range of camera movement. Does anybody know if it is possible to do something like a Jazz Club Interior with a stage using Dreamlight's Movie Maker, so you can save on render time and still get camera movement?

  • Sevrin Posts: 6,310
    kwannie said:

    Is there a good place to see some samples of animations that people have made of interior scenes that use only an HDRI as the environment? My issue is that most of the things I try to do are interior scenes. Essentially, I try to recreate MMD-style videos in DAZ Studio. The whole idea is having realistic characters performing music, singing, and dancing. Environment and atmosphere, though, are my nemesis. I can usually get away with animating up to 4 or 5 characters and doing a decent clean render at less than 20 seconds per frame if I only use the built-in backdrop with some color added to it. As soon as I add a wall, or curtains, or other geometry, my render times jump up to the 4-to-5-minute-per-frame range. But... geometry allows for the full range of camera movement. Does anybody know if it is possible to do something like a Jazz Club Interior with a stage using Dreamlight's Movie Maker, so you can save on render time and still get camera movement?

    As far as I can tell, Movie Maker comes with the street or temple scenes, and if you want streets and temples, that's great. But they're not indoor scenes. In movies, they usually use real locations and built sets for tight indoor shots, and matte paintings, green screens, and motion tracking for epic outdoor work. I'm sure it's partly because it's cheaper, but also because it's hard to fake perspective with something a few feet away. To do something like that you would need multiple HDRIs and a lot of editing between shots. I don't think that would save a lot of time when using Daz Studio.

  • kwannie Posts: 870

    So frustrating!!!!!!!!!! I love the ease of DAZ, but the limitations are unbearable. If I could get the beauty of the DAZ characters with DAZ-quality textures into a real-time rendering platform, so I could actually place the characters in some kind of convincing surroundings... game on!

    BTW, is there anything against the rules if I upload a zip here with a sample animation of an MMD song and motion, so people can see why I like DAZ characters?

  • Sevrin Posts: 6,310
    kwannie said:

    So frustrating!!!!!!!!!! I love the ease of DAZ, but the limitations are unbearable. If I could get the beauty of the DAZ characters with DAZ-quality textures into a real-time rendering platform, so I could actually place the characters in some kind of convincing surroundings... game on!

    BTW, is there anything against the rules if I upload a zip here with a sample animation of an MMD song and motion, so people can see why I like DAZ characters?

    As for point one, you can learn to animate in Blender and use the bridge to move assets.

    As for the zip, I don't think there's a problem as long as it's not copyrighted stuff you don't have the right to distribute. If it's a video, it would be better to link to it.

  • kwannie Posts: 870

    It's an animation with a DAZ character that uses converted motion from MMD and the music that goes with it. I think people would be amazed at how smooth the animation and lip sync can be when they're converted correctly.

    Well, the issue I keep hearing about with Blender is that the skin textures are difficult to work with in the transfer process. I also have iClone and Carrara, which I think can do large environments better than DAZ, but again, the texture issue.

    I love the animation tools in iClone, and the PopcornFX effects. There are also Unity and Unreal... but again, textures.

  • Drip Posts: 1,206
    fastbike1 said:

    I wonder if some of the issue is that one or more of the posters is using the technical definition of HDRI, which does require a large number of exposure stops to get complete lighting information, while other posters are using a more casual "definition" from Studio. The common Studio usage I observe seems to relate more to a background scene associated with lighting. I also wonder how many people understand that the DR stands for Dynamic Range, rather than Definition, Resolution, or some such.

    I'm going to have to disagree with @Drip in that color range has nothing to do with a true HDRI. It is exposure range that makes the HDRI, not the resolution of the image or the "number of colors". One can make a decent HDRI from a single photographic image, but must make a number of exposure changes within the editing program to do so. This is what the HDR presets in such programs do.

    Exposure range is stored within the color range of the file format. Where 255,255,255 is white in a regular image, it also displays as white in an HDRI. However, an HDRI can go way beyond that range; depending on the format, it can go to 1024 or even 4096 per channel. One could, for example, boost the red channel to 1024 and leave the others at 255, so you'd get 1024,255,255. What your screen displays then will still be white.
    So, what's the point if it's all white anyway? Light degrades as it moves away from the source and bounces around objects. After a short while, that 1024,255,255 light will degrade to (for example) 255,64,64, which just happens to no longer be white, but a shade of red. The original 1024,255,255 color that was set as emissive actually wasn't white, but the red was too intense to see. Only after the light has suffered falloff (or, otherwise, if you just darkened the image), or had some of its intensity absorbed by objects, does the high range of red become apparent.
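
    In code, that example looks like this (a toy numpy sketch using the same illustrative numbers):

        import numpy as np

        light = np.array([1024.0, 255.0, 255.0])  # HDR value: red far beyond display white
        print(np.clip(light, 0, 255))             # [255. 255. 255.] -> displays as pure white
        faded = light / 4.0                       # the light after falloff / absorption
        print(np.clip(faded, 0, 255))             # [255. 63.75 63.75] -> now clearly red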

    So, what's up with making images at different exposures, then?
    Those images at different exposures are used to calculate how far above 255 the red from our example should be stored.
    You make one image at normal exposure, and a white light gets stored as 255,255,255. Next, you add an image at low exposure, and for one reason or another, the light is less bright and slightly red. The software that merges those images of different exposures then knows that:
    A. The light is a light, and should have a color range beyond 255,
    and
    B. The light leans towards the red spectrum, and because of that, the red component will be assigned a higher value than the other two colors.

    The more exposure images you give to the software, the better it will be able to pinpoint the exact shape and intensity of the lights, to the point that it will recognize that some parts it first thought to be emissive are not emissive at all (for example, the bulb of a light bulb, which doesn't emit light; it's the wire filament inside that's actually emissive), but are merely overlit and blurred by scatter from the actual light source.
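
    A compact sketch of that merge idea (simplified from the usual Debevec-style weighting; it assumes a linear camera response, and the names are illustrative):

        import numpy as np

        def merge_exposures(images, exposure_times):
            """images: list of float arrays scaled 0..1; returns linear HDR radiance."""
            num = np.zeros_like(images[0])
            den = np.zeros_like(images[0])
            for img, t in zip(images, exposure_times):
                # Hat weight: trust mid-tones, distrust near-black and clipped pixels.
                w = 1.0 - np.abs(2.0 * img - 1.0)
                num += w * (img / t)  # this shot's estimate of the true radiance
                den += w
            return num / np.maximum(den, 1e-6)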


    Now back to the beauty canvas: the beauty canvas *is* one of these high-range file formats, and yes, it will properly store that 1024,255,255 light as 1024,255,255. It doesn't need low-exposure shots to figure that out: Daz Studio already knows how much light is being emitted and at what intensity, and the beauty canvas stores exactly that. And unlike the regular render output, values above 255 do not get cut off. This is why the various canvases are so important for artists doing post-processing in Photoshop; there is so much more information within those canvases than meets the eye. As a side effect, this allows those who need HDRIs that are impossible to create in real life (simply because the environment may not exist in real life, like an alien world, or because visiting it might not be possible because it's private property) to create high-quality HDRIs for such environments.

    For HDRIs from real life, yes, you will need multiple shots at different exposures. With a good modern HDR camera you'll need fewer separate shots (3 is usually enough for the top cameras, and they're often built to take all three shots at once too) than with an older digital camera (5 or even 7 shots used to be recommended, and those couldn't take multiple exposures at once).
    Daz Studio already knows all the exact details of what is within the shot: it has the entire topography of the objects, the emitters and their intensities, the materials and their absorption. And the results of all that can be stored on the beauty canvas, if only you enable the option. Alternatively, one can still use and combine the other canvases to get the special high-detail effects often associated with HDR images. But that moves into a slightly different use of HDR images: emphasizing details that the naked eye would normally not see, in such a way that they become visible. This use of HDR is not what defines HDR; it's just an application of it. But it does generate pretty pictures, which people have come to associate with HDR.
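
    As a quick illustration of why that stored range matters in post: pulling the exposure down on an EXR beauty canvas reveals highlight detail that an 8-bit render has already thrown away. A hedged sketch; the file name and the -2-stop choice are hypothetical:

        import numpy as np
        import imageio.v3 as iio

        canvas = iio.imread("beauty_canvas.exr").astype(np.float32)  # linear, values may exceed 1.0
        darkened = canvas * 2.0 ** -2                     # -2 stops of exposure in post
        out = np.clip(darkened, 0.0, 1.0) ** (1.0 / 2.2)  # rough gamma encode for display
        iio.imwrite("beauty_minus2.png", (out * 255).astype(np.uint8))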

  • kwannie Posts: 870
    edited August 2020

    deleted

    Onegai Test 4-1.zip (5M)
    Post edited by kwannie on