Rendering .HDR Files for Daz Studio

I asked Mr. Brinnen for his thoughts on this; he thought it a good idea to post/ask here...

"hey david, hope all is well...

I wanted to know if you have any thoughts, opinions, advice or strategies with regard to rendering .hdr files from Bryce specifically for use in the Daz Studio Iray render engine as backdrops... I can render images out as backgrounds/backdrops, but I mean using them as .hdr files for the same purpose and effect (much like Dimension Theory's photographic Daz Studio backdrop HDRs).

I have a hard enough time getting them to render to disk without Bryce crashing, so I'm wondering if maybe it's my settings or workflow causing this problem.

But then, after that, is there a special way to set the scene up so that it will render correctly (or as intended) in Daz Studio?

Any thoughts or info you could throw my way would be awesome. Using Bryce to generate HDRs for this purpose would make my day, thanks!"

... of course anyone can jump in, but I think this technique would be useful for MANY...


Comments

  • HDR Shop is no longer free. In my opinion it was the easiest way to generate your own HDR files. There is another application, Picturenaut, but it isn't as useful as HDR Shop used to be... and still is, if you have the funds to pay for it (not too expensive).

    To generate an HDRI you need multiple versions of the scene at different levels of lighting. Ideally you would render the scene out 5 to 10 times at very different light levels. The first shots should be quite dim, with each version getting brighter until the final image, which would appear totally burned out. These renders represent the increased dynamic range spanning multiple f-stops. When the application compiles the images you get an HDRI. It probably won't be a true HDRI, because dynamic levels beyond those a screen can produce cannot be introduced, but it will be closer than a JPG image, so it's probably still worth it.

    If rendering the same scene 10 times seems like torture, you can always use post-work effects to lighten and darken an image to create the various f-stops.
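
    For illustration, here is a minimal Python sketch of that merge step, using OpenCV's Debevec merger (the same idea the dedicated tools implement). The file names and relative exposure values are placeholders; MergeDebevec expects 8-bit, 3-channel inputs.

        # Merge a bracket of LDR renders (dim -> blown out) into a Radiance .hdr.
        import cv2
        import numpy as np

        files = ["render_ev0.png", "render_ev1.png", "render_ev2.png",
                 "render_ev3.png", "render_ev4.png"]               # darkest first
        times = np.float32([2.0 ** i for i in range(len(files))])  # 1, 2, 4, 8, 16

        images = [cv2.imread(f) for f in files]                    # 8-bit BGR

        hdr = cv2.createMergeDebevec().process(images, times)      # float32 radiance map
        cv2.imwrite("merged.hdr", hdr)                             # Radiance RGBE output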

  • There is a HUGE difference between simple background images used as HDRs and actual HDRs.

    The images shown use Dimension Theory's set (an awesome grab) in Daz Studio Iray (haven't tried them in 3Delight) at 0, 90, 180 and 270 degree rotations.

    No lights or any other scene objects except the figure and the underwear set.

    (Attachments: hdr-bd-0.jpg, hdr-bd-90.jpg, hdr-bd-270.jpg)
  • I took a few stabs at it, going through various settings, and so far here is what I came up with, but it could use tweaking...

    I set the document to a QTVR panorama, and changed the render settings to spatial optimization high (have no idea what it'll do but it sounded good, lol) and 360 panorama render... render size 6000x1850, but Bryce will crash while rendering =( The picture shown is the rendered .bmp file, NOT an .hdr file.

    The first few tries I didn't set the 360 panorama render option, which made a dramatic difference... turning off fog and haze helps with the washout at the horizon, but the method needs tweaking...

    HELP!!!

    (Attachment: hdr-bd-bryce2.jpg)
  • Horo Posts: 10,642

    MatCreator - I have two HDR sets in the store, made from photographs, that were made for Bryce 7, Carrara 8, Studio 4.8 3DL and Studio 4.9 Iray. All four programs interpret the same HDRI differently and have different capabilities and shortcomings. I talked with Dimension Theory back before Iray and bought some of his earlier sets, I talked to Dumor3D and have two of his sets, and I scrutinised two TerraDome 3 sets by Colm Jackson (the initial set was by Muze, redone by Colm).

    You need a spherical panorama, 360 deg wide and 180 deg high. Bryce cannot render such a panorama directly. David and I have the Spherical Mapper here in the Daz store, a lens that can be put in front of the Bryce camera to give you a spherical panorama. With Bryce you can render up to 4000 px wide, which is nice but somewhat limited if you want the HDRI visible as background. Forget rendering to disk. 4000 px over 360 deg leaves about 11 px per deg. With a normal 50 mm lens, you see about 40 deg horizontally, which is 440 px of the panorama. Now figure how wide your final render can be. However, size only matters for the backdrop. As far as the light generated from it is concerned, a small size will do nicely, perhaps 1000 x 500 or so.
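
    As a quick check of that arithmetic, a tiny Python sketch (the 40 deg field of view for a 50 mm lens is approximate):

        # How much of a 4000 px spherical panorama a normal camera view uses.
        pano_width_px = 4000
        px_per_deg = pano_width_px / 360.0           # ~11.1 px per degree

        camera_fov_deg = 40                          # ~50 mm lens, horizontal
        backdrop_px = camera_fov_deg * px_per_deg    # ~444 px for the final render
        print(round(px_per_deg, 1), round(backdrop_px))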

    With Bryce 7 also come 4 tutorials on how a panorama can be rendered. All have, of course, the 4000 px limitation. You can render the 6 faces of a cube at 4000 x 4000 px and stitch them with Pano2VR (google 'ggnome'), which is moderately priced; the more expensive Pro version can do HDRI, but you can always stitch the individually lit panoramas as normal LDRIs and use the free Picturenaut 3.2 (google 'hdrlabs') to merge them into an HDRI. The other option is to render with a camera setting that equals a 12.14 mm rectilinear lens on an FX camera and render 8 shots. They can be assembled with the moderately priced PTGui (google 'ptgui'); the Pro version can stitch HDRI panoramas. You can also use the free Hugin (google 'hugin') to stitch the panorama. The cube and rectilinear lens methods are described in detail on my website: Bryce & 3D CG Documents > Mine (PDF) > Basics > Huge Panoramas.
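
    If you would rather script the cube-face stitch than buy a stitcher, the remap itself is short. Below is a rough numpy sketch; the face file names and the orientation convention are assumptions that must be matched to the actual camera directions of the six renders, and it uses nearest-neighbour sampling for brevity.

        # Reproject six cube faces into one equirectangular (spherical) panorama.
        import cv2
        import numpy as np

        faces = {k: cv2.imread("cube_%s.png" % k) for k in
                 ("posx", "negx", "posy", "negy", "posz", "negz")}
        F = faces["posx"].shape[0]            # face edge length in px
        H, W = F, 2 * F                       # 2:1 spherical panorama

        v, u = np.mgrid[0:H, 0:W].astype(np.float32)
        lon = (u / W) * 2 * np.pi - np.pi     # -pi .. pi
        lat = np.pi / 2 - (v / H) * np.pi     # +pi/2 .. -pi/2
        x = np.cos(lat) * np.sin(lon)         # view direction per pixel
        y = np.sin(lat)
        z = np.cos(lat) * np.cos(lon)

        eps = 1e-9
        axx, ayy, azz = np.abs(x) + eps, np.abs(y) + eps, np.abs(z) + eps
        dom = np.argmax(np.stack([axx, ayy, azz]), axis=0)   # dominant axis

        out = np.zeros((H, W, 3), np.uint8)
        for name, mask, s, t in [
            ("posx", (dom == 0) & (x > 0), -z / axx, -y / axx),
            ("negx", (dom == 0) & (x < 0),  z / axx, -y / axx),
            ("posy", (dom == 1) & (y > 0),  x / ayy,  z / ayy),
            ("negy", (dom == 1) & (y < 0),  x / ayy, -z / ayy),
            ("posz", (dom == 2) & (z > 0),  x / azz, -y / azz),
            ("negz", (dom == 2) & (z < 0), -x / azz, -y / azz),
        ]:
            fi = np.clip(((t + 1) / 2 * (F - 1)).astype(int), 0, F - 1)
            fj = np.clip(((s + 1) / 2 * (F - 1)).astype(int), 0, F - 1)
            out[mask] = faces[name][fi[mask], fj[mask]]

        cv2.imwrite("panorama.png", out)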

    Faking a true HDRI in Bryce is simple as far as the theory goes, but it can be a bit tricky to get right. Most important: make sure Gamma Correction and 48-bit Dithering are disabled. All Bryce lights are linear, which is a prerequisite. This means that if you double Diffuse, twice as much light gets into the scene; it works like halving the exposure time or opening the aperture by one f-stop. Export the render as 48 bit TIFF and you can render in 2 f-stop steps, saving half of the renders. Depending on how well you can adjust the lights (don't forget haze, sky colour and such), with 4 to 5 renders per shot you will get an acceptable result. However, be prepared to need some cheating. Theory doesn't always match the real world.
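
    Put as numbers, the bracket plan is simple (a sketch; the starting Diffuse value is arbitrary): doubling Diffuse adds one f-stop, so a 2-stop step quadruples it.

        # Plan a 5-render bracket in 2 f-stop steps (doubling Diffuse = +1 EV).
        base_diffuse = 10.0                   # darkest render; pick to taste
        for i in range(5):
            ev = 2 * i                        # two f-stops per step
            print("render %d: Diffuse = %g (EV +%d)" % (i, base_diffuse * 2 ** ev, ev))
        # render 0: Diffuse = 10 ... render 4: Diffuse = 2560 (EV +8)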

    If you are going to use the HDRI panorama only for Studio Iray, you can take shortcuts to take advantage of how the HDRI option is implemented. Studio has no dedicated tone-mapping option included (neither has Carrara) to make the HDRI backdrop appear decent. It is done by gamma, but if gamma is used, the light from the HDRI is also subjected to gamma and is not linear anymore. I've seen HDRIs made for Studio Iray that are essentially LDRIs saved as HDRI, with an extremely bright sun copied into the panorama at a size 10 times bigger than the real sun (5 deg instead of 0.5 deg). This is clever, because the backdrop is "tone-mapped", the ambient light it produces is nice, and the sun copied into the image provides ample key light.
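
    That cheat is easy to script. Here is a hedged numpy sketch of painting an oversized, overbright sun disc into an equirectangular image; the sun position, disc size and pixel value are assumptions to tune.

        # Paint a ~5 deg overbright sun disc into a float copy of an LDR pano.
        import cv2
        import numpy as np

        pano = cv2.imread("pano.png").astype(np.float32) / 255.0
        H, W = pano.shape[:2]

        sun_lon, sun_lat = np.radians(30.0), np.radians(40.0)  # sun direction
        sun_radius = np.radians(2.5)                           # 5 deg disc
        sun_value = 50000.0                                    # very bright

        v, u = np.mgrid[0:H, 0:W]
        lon = (u / W) * 2 * np.pi - np.pi
        lat = np.pi / 2 - (v / H) * np.pi
        # angular distance on the sphere from each pixel to the sun centre
        cosd = (np.sin(lat) * np.sin(sun_lat)
                + np.cos(lat) * np.cos(sun_lat) * np.cos(lon - sun_lon))
        mask = np.arccos(np.clip(cosd, -1, 1)) < sun_radius

        pano[mask] = sun_value
        cv2.imwrite("pano_sun.hdr", pano)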

    Instead of adding an immensely bright, big sun into the HDRI, you can also leave it out completely and use a radial in the scene to provide the key light. In such a scenario, you would render the parts for the panorama in Bryce and export the result as a 96 bit Radiance *.hdr file, which Studio can read. Since size does not matter much for the light, a small HDRI panorama can be rendered, and additionally a big LDRI. The HDRI is used in Iray but not shown as backdrop. A sphere around the scene can be created that takes the high resolution LDRI on the inside (it may have to be mirrored). This sphere must be transparent to light so the HDRI light can shine through. In fact, Studio 3DL Uberlight2 works similarly but uses a diffuse (or specular) convolved HDRI for the light.

  • Wow... that is a lot to take in. Will have to marinate on all of this.

    I have absolutely no background in or idea of real-world photography, which is crucial to all of this, and then figure in the individual dynamics of each program...

    My brain just exploded =/

    Will return to this after researching now that I've been pointed in the right direction, thank you kindly, very generous with your time and wisdom =)

  • HoroHoro Posts: 10,642

    You're more than welcome. When a Bryce 6.0 beta came out for testing in 2005, there was that new HDRI tab in the Sky Lab (now called IBL) and I had to ask what HDRI stands for. Any day you learn something new is worth getting up in the morning for.

     

  • Played a bit more; let me show what I came up with so far... (still way off with regard to being usable, but it shows enough to try to make it work, if at all possible)...

    I was having trouble with the ground texture washing out, just coming in way too bright. It didn't seem like that in the render from Bryce, but oh well.

    You see the "pinch" in the center of the ground unless you switch to infinite or finite (no ground option). I didn't show a render of that.

    (Attachments: bryce-panorama-1-rot0.jpg, bryce-panorama-1-rot90.jpg, bryce-panorama-1-rot180.jpg, bryce-panorama-1-rot270.jpg)
  • To adjust the ground washout, I adjusted the material: 0 diffusion, 0 ambience... I chose a darker texture to "see"...

    At 4000x1230, I'm not getting much detail out of the image/HDRI. Also, the sun doesn't seem to be doing anything?!?

    (Attachments: panorama-01-rot-0.jpg, panorama-01-rot-90.jpg, panorama-01-rot-180.jpg, panorama-01-rot-270.jpg)
  • Turning down the sun diffuse in the Sky Lab either way was too much; at 50% the ground wash seemed "removed", noticeable when comparing the render in Bryce vs the HDR rendered in Studio.

    Also, the light doesn't seem to give a shadow, or direction. It's more like a dim flood light =/

    I was able to position the sun where I thought was "exactly where it should be", but I'm not seeing any visual sign of its presence...

    So, so far, it doesn't seem to be working at all, lol

    Enjoy the Super Bowl!

    (Attachments: 3rdhdri-rot-0.jpg, 3rdhdri-rot-90.jpg, 3rdhdri-rot-180.jpg, 3rdhdri-rot-270.jpg)
  • Horo Posts: 10,642

    Rendering a pano with the sun in Bryce and saving it as HDR doesn't make it an HDRI. Bryce renders 48 bit internally, which gives a better light resolution, or finer steps, when saving as 48 or 96 bit than with only 24 bit. The maximum level is 1.000, and the clouds may reach the same brightness as the sun. For a true HDRI, you need several renders with different light settings while keeping the visible sun (not Diffusion) at the same level (that's cheating, but it works). When merging the renders to an HDRI, the sun always has the same brightness but the environment gets darker. This makes the sun brighter than it is. Getting the sun right for Iray needs a really bright and rather big sun. Are you sure the sun is visible in your pano at all?

    You have two different challenges: (1) creating a panorama and (2) creating an HDRI. Try to master each one separately before you put them together. The next challenge is to make it work in Iray, but that will be much less of a problem for you than it would be for me.

  • While I in no way could elaborate on the technicalities, I can say that "it didn't work as expected", lol

    I made sure the sun was visible, but that is moot.

    I'm curious though, if you can't render .hdr files to disk, and exporting renders as .hdr doesn't create true .hdr files, why have it as an option at all?

    Can the .hdr files be used for anything at all?

    Also, the spherical panorama render seems "useful"; is there any way to use, apply or incorporate those into my 3D imagery in Bryce, Studio or Carrara?

  • Horo Posts: 10,642

    The sun is a radial as far as the light cast into the scene is concerned. Diffuse changes how much light is generated. The visible sun, on the other hand, is just "an image" with a maximum brightness of 1.000. To make the sun in the image brighter than 1.000, several renders with the same brightness of the visible sun but with different Diffuse settings must be merged into an HDRI. These renders must not be saved as hdr. The brightness of the sun in the HDR image can thus be boosted much higher than 1.000: 1,000 or 1,000,000 or whatever. We call it image based light (IBL) because the light is contained in the image as very high pixel values.

    An hdr has all the image information of the render: 48 bit blown up to 96, which is better than 24 bit. Although this is not a true HDRI, it can be opened in an HDRI-capable graphics program and any tone-mapping operator applied to it in post to change the character of the image or equalize it better than just gamma can.
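
    One such operator applied in post might look like this; a sketch using OpenCV's Reinhard tone mapper on an assumed Bryce export.

        # Tone-map a Bryce .hdr export down to a displayable 8-bit image.
        import cv2
        import numpy as np

        hdr = cv2.imread("bryce_export.hdr", cv2.IMREAD_UNCHANGED)  # float32 BGR
        ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)     # floats in 0..1
        cv2.imwrite("tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))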

    A panoramic render can be mapped onto a sphere (mapping mode Spherical in Bryce) and, with the camera inside this sphere, gives you the complete environment. Reflecting objects in the scene will show the complete environment. I'm quite sure Carrara and Studio can also map a panorama onto a sphere.

    A true HDRI panorama (also a faked one from several renders) can be used as HDRI in Bryce, Carrara and Studio (3DL and Iray). Even if it doesn't provide enough key light, the high contrast creates much stronger and more realistic reflections and specular highlights on the objects in the scene.

  • Wait, lol....... let me explain better...

    Bryce "can" generate an .hdr file. But you said the sun isn't a true light, and does not behave as a true .hdr, so what purpose does it serve for Bryce to generate an hdr that doesn't "do" anything? Surely there must be some reason or use for them, no?!?

    But I am very interested in learning how to render/produce true working .hdr files from Bryce that can be used specifically in Iray as backdrops that render with the character/scene... that is the point of this thread. More research and time =P

  • Horo Posts: 10,642

    Oh, the HDRI does something even if the sun (or key light) isn't bright enough in the image: it gives nice omnidirectional ambient light. The "Diffuse" control is called "HDRI Effect". Let me confuse you a bit more.

    Let's forget the colors for a moment. In a normal image, white has a pixel value of 255 and black 0; there are 256 different brightness levels. HDR images also have the black pixel value at zero (or nearly so, e.g. 0.000001) but the white pixel value can be very high (e.g. 1,000,000,000,000). Looking at "BreakingCumulusSky.hdr" from a TerraDome 3 set, you find the darkest pixel value at 0.00992341 and the brightest at 18,117.91733809, which works out to a dynamic range of 1,341,310:1 or an EV (exposure value) span of 20.3.
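
    Those figures come straight from the pixel values. A small Python sketch of the calculation (how the three channels are reduced to one brightness affects the exact numbers):

        # Dynamic range and EV span of an HDR image.
        import cv2
        import numpy as np

        img = cv2.imread("BreakingCumulusSky.hdr", cv2.IMREAD_UNCHANGED)
        lum = img.max(axis=2)                 # brightest channel per pixel
        darkest = lum[lum > 0].min()          # ignore exact zeros
        ratio = lum.max() / darkest           # dynamic range, n:1
        print("%.0f:1 -> %.1f EV" % (ratio, np.log2(ratio)))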

    In a Bryce render with a visible white sun, black sky and clouds, saved as 48 bit TIFF (16 bit if we ignore the colors), the max value is 1.000 and there are 65,536 brightness steps between black and white. If saved as 96 bit (32 bit if we ignore color) TIFF we get a pixel value range from 0.000 to 1.000; exported as HDR, 0.000 to 0.995. Calculating the dynamic range doesn't make sense because any value divided by 0 is infinite. There is no more light in the saved image than there would be in a conventionally saved one (bmp, jpg), but the light levels are in finer steps - in fact, one step in a conventional 8 bit image is divided into 256 steps. The brightness resolution is 256 times better.

    To bring the colors back into play: what is true for black and white above is true for each of the three colors red, green and blue.

    However cool it is that Bryce renders internally with 16/48 bit and saves the result as hdr, there is still no really bright sun in it. Neither is the sun really bright in a photograph. Also here you have to take several shots with different camera settings and then merge them into an HDRI that has a really bright sun, one that can also serve as a key light in a rendered scene if used for image based light.

  • Rashad Carter Posts: 1,803
    edited February 2017

    These are great questions, MatCreator. I'd like to offer more thoughts if that's okay. We long-timers often forget that not everyone already knows the stuff we're discussing, or the meanings of the terminology. So I'll be more basic in my explanation so that anyone at any level can follow.

    Are you familiar with music at all? Analogies are often useful for me. I like to think of dynamic range like a piano keyboard. Standard .bmp, .jpg, and .png images are only concerned with the brightness and color levels a monitor can display and that a printer can apply to paper. This is like a single octave on a piano keyboard, from middle C to the next C, for example. Any tones above the higher C will be taken down an octave; any tones below middle C will be taken up an octave. Compression is the word I'm looking for. Images on monitors do the same thing: compression. But this is in no way representative of the total dynamics of light available in the real world, just as one musical octave in no way represents the total number of tones a piano can produce. No one has ever lost their vision from looking directly at the sun in a .bmp image. Musically, it is as if the entire piece were played within a single octave on the piano. While the listener will get a feel for the piece, it will not be nearly the same as if the piece had been played across multiple octaves from bass to treble, as it was originally written.

    And that's where HDRI comes in. Basically, just because a pixel renders as white on the screen doesn't mean it has reached infinite brightness in real terms; it just means the monitor cannot display any values higher than this, so it caps them beyond a certain value. The same is true for color, as Horo described. The actual degree of possible "blueness" goes way beyond B255 R0 G0. It's probably closer to B2000000000000000000000000000 R0 G0.

    LDRI = Low Dynamic Range Image: values cap between 0 and 255. No decimal color values are allowed, thus it is fully quantized.

    HDRI = High Dynamic Range Image: values extend from 0 to infinity. Decimal increments are allowed. Still quantized to some degree, but much, much less. Almost as accurate as analog.
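
    A tiny numpy illustration of the difference (the sample radiance values are made up):

        # 8-bit integers clip; floats keep the sun distinct from a bright cloud.
        import numpy as np

        scene = np.array([0.2, 1.0, 30.0, 18117.9])           # "real" radiances
        print(np.clip(scene * 255, 0, 255).astype(np.uint8))  # [ 51 255 255 255]
        print(scene.astype(np.float32))                       # all four survive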

    The purpose of IBL (Image Based Lighting) was to recreate the lighting captured from the real world and then port that light information into a CG render as faithfully as possible; knowing some information will always be lost, the goal is to minimize that loss. Someone HAD to figure out a way to take photographs of the real world and retain most if not all of the light and color information, instead of having to dump any values above 255. They realized that the only way to do it was to capture the scene at multiple exposures, and then to compile the exposures into a single new image format called HDRI.

    The Issue: The real world works as analog, with unlimited color variation and brightness potential. HDRI is nearly analog, so it can capture a more accurate sampling of real-world color and light values. But then we have the CG world, which operates on LDRI values quantized between 0 and 255. How do we bridge this mismatch, and why would we? What benefit do we gain by reaching outside the 0-255 range anyhow, if the monitor cannot display values any higher than that? I'll explain.

    With more steps between the lightest and darkest, and the greatest and lowest saturation, many more subtle colors can be achieved than would be possible with a lower number of steps. It is a bit like moving from a quantized system to a continuous one. Even though our monitors and printers are limited to 16 million colors (256 x 256 x 256), we rarely use all of them. HDRI helps us squeeze more out of the colors our monitors and printers are already willing to produce.

    Building an HDRI with a Sun

    From LDR to HDR back to LDR for display; what's going on? This gets more to what Horo was talking about. Cameras and render engines are both natively LDR, meaning you cannot achieve brightness greater than RGB 255 in a single exposure. When people create HDRIs from renders or from photos, the limits are the same: the total brightness the sunspot can ever achieve is RGB 255. But in real life the value is thousands of times higher. When applied to a scene, an LDR sunspot intensity of only RGB 255, compared and combined with the other bright colors in the "wannabe" HDRI, will not appear to be that many steps brighter than the other colors. It will blend into the other colors of the image as ambient light, instead of standing out as a prominent key light source. Remember, the Sun is very small in the sky; it occupies a very small amount of real estate on the HDRI image. It must kick serious light out of those few pixels. An LDR sunspot won't be bright enough at such a small size, so people often cheat and increase the sunspot size by 10x, making it put out far more light, almost bright enough to appear as the real sun would have, but the cost is that shadows are now ruined by being far too soft. But if the sun were HDR in nature, then though it might appear as plain white on a monitor, it would in fact be much brighter than that within the CG environment. While the viewer cannot see the sun at its true brightness, the models in the scene being affected by the HDRI DO see it at its true brightness, making them look like they would have appeared in the real world, to some degree at least. That's why we use HDRI: for the effect it has on the models in the scene.

    Importance of the Sun to HDRI: The sun is the primary driving force of any outdoor lighting. If it isn't, then there is a major problem. But the magic of the realism tends to come from the shaded regions, and most often we don't get enough light into those regions. More on that in a moment. In real life the Sun has a certain size in the sky, and that is the reason why it tends to give somewhat hard shadows at high noon. The softness of any shadow is governed primarily by the width of the light source itself. If a light source is too large, it will produce shadows that are too soft. On an unconscious level, we know from experience what the sun and its shadows are supposed to look like. When a sun that is too wide is used, it tends to miniaturize the entire render, because sunlight starts looking like a nearby lamp instead of a fireball 93 million miles away. Lamps give soft shadows most of the time. This is what Horo was talking about. We want a true HDRI, not a mere LDRI used in place of one. A true HDRI will feature a very small sunspot that still has enough brightness to outshine all the other pixels in the HDRI combined.

    Multiple Exposures: In the real world, Horo takes a series of real photos of the sky with the visibly real sun at its real brightness. It takes several exposures to get this right. As mentioned above, the camera we use is most likely also limited to RGB 255 for the sunspot. So how can we build the HDRI so that the sun seems brighter? We take multiple exposures at different f-stops, both above and below the default one. What you will find with the shorter exposures is that while the environment gets darker and darker, the sunspot by comparison remains at the same high intensity. The darkest frame will likely be an all-black image with nothing but a sunspot. When you compile all of these exposures into an HDRI, the sunspot, even though it is small, will still produce enough light to appear as the primary light source, just as it did in the real world. What you will also find is that in the longer exposures, where the sky is blown out, the regions that used to be in shade are now easily visible. When the final HDRI is produced, you will find that the shaded regions of the HDRI are more easily viewed than those regions were in the original LDR starting frame.
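
    Dedicated tools do this merge for you, but the core of it is short enough to sketch in numpy, assuming linear inputs (true for Bryce renders, only approximately so for camera JPGs): divide each frame by its relative exposure to bring it back to a common scale, then average with a weight that distrusts pixels near black or white. File names and exposure ratios below are placeholders.

        # Merge bracketed frames (2 f-stops apart here) into one radiance map.
        import cv2
        import numpy as np

        files = ["stop_0.png", "stop_1.png", "stop_2.png", "stop_3.png"]
        exposures = np.float32([1, 4, 16, 64])    # relative exposure per frame

        num, den = 0.0, 0.0
        for f, t in zip(files, exposures):
            img = cv2.imread(f).astype(np.float32) / 255.0
            w = 1.0 - np.abs(2.0 * img - 1.0)     # hat weight: 0 at clip, 1 at mid-grey
            w = np.maximum(w, 1e-4)               # avoid dividing by zero
            num += w * (img / t)                  # back to a common linear scale
            den += w

        cv2.imwrite("merged.hdr", (num / den).astype(np.float32))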

    LDR Pixel Depth: Not all LDR images are the same. A higher bit depth means more information is used to describe a color. While a pixel might appear as "red" in both a .bmp and a .tiff, the .tiff will have a richer description of that redness. When it comes to compiling an HDRI from LDR inputs, you want to start with the richest color information you can get.

    Bryce 7 HDRI Output: Thanks to Horo, Bryce 7 has the ability to export finished images in an HDRI format. The problem is, there's really no HDR information in the scene, as is evidenced by the fact that simply exporting a sky from Bryce doesn't produce the bright sunspot needed. You still need to compile multiple exposures of your render, whether by tweaking a single image in post or by actually rendering out your scene in multiple passes with varying light levels. The second method is by far the best, but much more time consuming.

    Below is a quick example of the multiple exposures and the sunspot. This is not an ideal scene because it is not a panorama; in a panorama the sun would be much smaller than in this frame. The point is for you to see the exposures that are needed to compile an HDRI.

    And you still have the problem of how you are going to compile the series of LDR images into an HDRI in the first place. Not every application allows images to be saved in .hdr format. You really should look into HDR Shop.

    After reading this, the stuff Horo was saying earlier should make much more sense.

     

    Edit: OMG!!!!!! So sad! I just read that the creator of HDR Shop passed away in July 2016. Or maybe they just closed down? Not sure. From what I can see, it is no longer available for purchase. I did find a link to the original free version that went off the grid years ago. Not sure how I feel about using it, but it is there if you look for it.

    Best of luck.

    (Attachment: True HDRI.jpg)
    Post edited by Rashad Carter on
  • Hansmar Posts: 2,929

    Thanks Rashad, for this very clear explanation that even I can follow!

  • avmorgan Posts: 216

    I recently picked up the Small World camera from the store, and have since been wondering about using it, rendering multiple passes at different EVs in Daz/Iray, and then using Photoshop's HDR photomerge to create my own HDRIs. I'm just not sure whether it's sufficient to simply change the EV setting for each render, or whether the photomerge in Photoshop is good enough for that purpose.

    Any thoughts?

  • Horo Posts: 10,642

    avmorgan - I'm not sure what you actually want to accomplish. There is a Small World set in the store for Bryce; the HDRIs included are in the angular map projection. Load one into Bryce and export it as spherical and it can then be used directly in Studio Iray. Studio needs the HDRI in the spherical projection and as a Radiance RGBE rle-2 compressed *.HDR file, which is very common.

    If you want to fake an HDRI from a rendered scene, this can be done. You have to render several spherical panoramas with different light settings and merge them. The principle is relatively simple but elaborate, and some cheating may be necessary. For this, I have a three-part video on David's YouTube channel and illustrated PDF transcripts; the links are on my website (see sig). Go to Bryce & 3D CG Documents > Videos > Horo > Make Fake HDRI parts 1, 2 and 3 (More Videos 9, 10, 11).

    Merging individual renders in Photoshop is possible but I don't like the results and prefer the free Picturenaut. Spherical panoramas can be rendered in Bryce in different ways. One is to use the Spherical Mapper (available in the store), but it limits the renders to 4000 x 2000 pixels, which may or may not be enough for what you have in mind; it is the fastest method, though. You can also render the six faces of a cube or use another camera setting (I have a PDF on how to do this: Bryce & 3D CG Documents > Mine > Basics > Huge Panoramas (Cube & FLO)); there are also four different methods that come with Bryce (Content > Tutorials > Horo Wernli). If you render the panorama in parts (like cube faces, etc.), you need a stitching program to assemble them.

  • MatCreator Posts: 215

    I gave up, lol... purple fluid started trickling down the rear left side of my brain, then I went into a sneezing/hiccup attack. I then started speaking in French, and when I told myself to calm down I couldn't understand what I was saying... I took a nap and had a dream I couldn't sleep, so when I woke up I was just too tired =P

    I do so much better when the process is laid out, or even better, a video tutorial that walks one through the steps taken. Even if I could understand, I would still need to see how it's done. My mechanic can explain all up the wazoo why my car isn't running. I want to sit in it, put my seatbelt on, press the gas and go, LOL

    I must've watched 3 dozen tutorials and read hundreds of threads on creating the depth of field effect in Studio. Camera angles, real world settings, photography basics... and no one said: "switch to the top view and move the focal distance slider until the crosshair reaches your focal/starting point"...

    I've been doing "this" kind of stuff for 20-some-odd years, and truth be told, I haven't a clue what I'm doing, LOL

    But thank you for explaining. I do re-read these things and eventually it will sink in.

  • gaosigm Posts: 0

    Oh, my God, it's really complicated

  • mindsong Posts: 1,701
    edited November 2018

    If I may rekindle this thread with a question for Horo (or anyone who may have related thoughts),

    Up above, you (Horo) mention that one shouldn't bother to 'render to disk' to achieve a larger image size in this particular process.

    I was thinking of generating a higher resolution LDRI background dome (mostly sky/clouds) that would have reasonable detail for renders in DAZ Studio. 4000x2000 isn't quite enough, and rendering to disk would seemingly allow for higher resolution(s).

    Can you explain why this won't work?

    ETA: from Horo's pdf document: https://horo.ch/docs/video/pdf/Transcript_MakeHDRI13.pdf

    ...

    To prefer the Spherical Mapper to the Scene Converter has also a disadvantage: the Scene Converter can create a light probe of up to 4000 pixels diameter; the Spherical Mapper is limited to a 1600 pixel diameter probe. If you need a larger probe, use the Scene Converter. If you need a still larger probe (Bryce cannot handle it at the moment but other programs may), render the six cube faces at 4000 pixels per side to get an angular map of 5500 pixels or a spherical panorama of 14000 x 7000 pixels. From an accuracy point of view, the six cube faces give the most accurate panorama and the Scene Converter the worst (though it is still very good), and the Spherical Mapper is in between.

    ...

    (Horo, is that last sentence correct in your PDF? Or should the Scene Converter be the 'in between', and the Spherical Mapper the lowest resolution option?)

    So, if I read this correctly, using Bryce as the source, a six-face render/stitch approach will create the best 'surround' LDRI environment for use as a 'scene-dome' in another program (e.g. DS/Carrara). I would assume one can still use the Spherical Mapper lens and the multi-exposure instructions above to generate a useful 1000x500 HDRI as a 'matching' light source for a six-face higher-resolution LDRI generated from the same position?

    (I'm also curious what causes the Spherical Mapper and Scene Converter to have limited resolution... a Bryce limitation?)

    thanks in advance,

    --ms

    Post edited by mindsong on
  • I just discovered this fascinating thread.

    I can make HDRIs with my camera and combine them in a suitable application such as Aurora. These images can be very interesting and often beautiful. But I suspect from what I am reading here that the dynamic range in my processed photo is nowhere near as great as what you folks are talking about. Is that correct?

    If I am wrong about that, then can such a photo be used in DS as an HDRI background... though I know it would not provide any illumination... Thanks.

  • Horo Posts: 10,642

    mindsong - sorry, I somehow missed your comment. Bryce can render only 4000 px wide. You can always try render to disk to get a bigger picture, but I had issues using this option. This is the reason for the limited resolution, but you can always give it a try. The worst that happens is that you lose some time.

    The Scene Converter was made for Bryce 6.x, which can only use HDRIs in the angular map projection. Bryce 7 can use that projection too, and additionally the spherical one, which is what the Spherical Mapper produces. The spherical projection can be used in most 3D render programs (Daz Studio, Carrara, Vue, etc.).

    Bendinggrass - I don't know Aurora; I use the free Picturenaut. If you are really interested in shooting photos for HDRI panoramas, you'll find complete documentation for creating HDRI panoramas as an illustrated 23-page PDF document (2MB) on my website: HDRI & Panoramas > PDF Document. I shot over 300 HDRI panoramas and took the opportunity to make a lot of mistakes. The document should help you not repeat them.

  • mindsong Posts: 1,701

    Thanks Horo,

    Quick question/idea for creating an image exposure series in Bryce: set up the desired scene as you wish, crank the lights up to the highest values of your series, place a transparent plane in front of the camera, then shoot the sequence with varying levels of decreasing transparency of the plane... It could be quite accurate, and such a 'filter' could be placed in front of other 'lenses', etc.

    Do you think it would work as expected?

    Cheers,

    --ms

  • just butting in

    I use Microsoft ICE to stitch photos and panning videos into 360 images

    https://www.microsoft.com/en-us/research/product/computational-photography-applications/image-composite-editor/

    Maybe using a rotating camera in Bryce at the largest resolution you can manage, you can create a suitably big 360 image; it worked using Skyrim screen captures too!

  • Horo Posts: 10,642

    mindsong - good idea with the ND filter, but you can also get the result without it. You see, you need the visible lights bright in order to get them as light sources in the HDRI. Make the objects that light the scene (i.e. the bulbs and bars of a fluorescent) visible and fully bright (perhaps with a colour tint, like yellow for bulbs or blue for fluorescents). You can use Ambient to make them visible. Start with a fully lit room, perhaps even an excessively lit one, and render the panorama. Then reduce the light output of each light source to half or even a fourth, but keep the visible bulbs fully bright. Render, repeat, and perhaps for the last and second-to-last dark renders, reduce the brightness of the visible sources (reduce the ambience). Merge the renders to an HDRI, and when you use it in Bryce IBL you will get the strong light from the direction of the "bulbs" and the ambient light from the scene. It needs experimentation how many renders are needed and when to reduce the brightness of the "bulbs". Render small panoramas, 1024 x 512 or so, and test the result. Once you know how each "exposure" has to be set, render at full size spherical or render the cube faces. If you render cube faces, assemble each "exposure" into a pano and finally merge all panos to an HDRI. Do not forget to disable Gamma Correction and 48-bit Dithering, and export as 48-bit TIFF.
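
    As a render plan, that procedure might look like this (a sketch; the number of renders and the point where the bulbs' Ambient starts to drop are exactly the things Horo says need experimentation):

        # Indoor HDRI bracket: halve the lights each render, keep bulbs bright.
        renders = 6
        for i in range(renders):
            light = 1.0 / 2 ** i                  # light output halves each time
            bulb = 1.0 if i < renders - 2 else light
            print("render %d: light output x%g, bulb Ambient %g" % (i, light, bulb))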

    Wendy_Carrara - not sure this will result in a spherical panorama, I think it will rather be in the cylindrical projection.

  • Horo said:

    Wendy_Carrara - not sure this will result in a spherical panorama, I think it will rather be in the cylindrical projection.

    https://www.deviantart.com/wendyluvscatz/art/A-651685202

    well yes it does

  • Horo Posts: 10,642

    Wendy_Carrara - oh, that's cool.
