Should Tone Mapping Negatively Affect Render Speeds?

evilded777 Posts: 2,465
edited November 2015 in Daz Studio Discussion

Title says it all... or, rather, title asks it all.

I am working on an article talking about using reasonable light settings and tone mapping and my experiments have led me to the unfortunate conclusion that doing things "correctly" leads to longer render times.  That's disappointing and not at all what I expected (nor, frankly, is it what the Iray Dev Blog seems to suggest).

Am I missing something? Because we aren't talking a little bit of time here... we're talking 2x or 3x longer.

Simple scene with not much in the way of reflective surfaces, 1 human figure, a three-point light setup, and environment lighting disabled.  It's all on the lights and tone mapping.  And my article premise is being kicked in the gut by my experimental results.


Comments

  • Gr00vus Posts: 372
    edited November 2015

    I've experienced the same thing. In anything less than daylight, photometric light objects + the tone mapping adjustments required to deal with the lower light levels = much slower render times. Even using an "indoor" HDRI lighting scheme with the required tone mapping adjustments = much slower render times.

    I imagine adjusting the tone mapping settings from the defaults results in plenty of extra calculations when computing the color value of each pixel. I wonder whether, for some reason, the tone mapping calculations happen on the CPU even when the GPU is doing the rest of the work. We'd have to know some gritty details of the calculation pipeline/flow to better understand where the extra time is being spent.

    It makes me sad because, no matter how good an HDRI is, my experience has been that using photometric light objects results in a more realistic image.

  • Tobor Posts: 2,300

    Iray wants brighter scenes so that it can more accurately calculate convergence. The more "photons," the more hits; the more hits, the faster convergence is reached. Every pixel needs multiple samples, and those samples come from the light in your scene. So if you're lighting a scene with less light and then boosting the levels with tone mapping, you're likely to get slower rendering.
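
    Here's a toy Python sketch of that statistical effect (it's not Iray's actual sampler, and the "hit probability" numbers are made up): the rarer the light hits, the more samples it takes to reach the same noise target.

        # Toy Monte Carlo sketch (not Iray's sampler). Each pixel sample is a ray that
        # either finds a light (probability p) or doesn't; the estimate's relative
        # noise falls roughly as 1/sqrt(samples), but starts worse when hits are rare.
        import numpy as np

        rng = np.random.default_rng(0)

        def samples_for_noise(target_rel_noise, hit_prob, batch=256, cap=2_000_000):
            """Count samples until the running estimate's relative std error meets the target."""
            hits, n = 0, 0
            while n < cap:
                hits += int((rng.random(batch) < hit_prob).sum())
                n += batch
                mean = hits / n
                if mean > 0:
                    rel_err = np.sqrt(mean * (1 - mean) / n) / mean
                    if rel_err <= target_rel_noise:
                        break
            return n

        for p in (0.5, 0.1, 0.02):   # well-lit scene -> frequent hits; dim scene -> rare hits
            print(f"hit probability {p:0.2f}: ~{samples_for_noise(0.05, p)} samples for 5% noise")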

    How much light is enough to produce a faster render? What's the length of a piece of string? I think it all comes down to a myriad of things, and every scene comes with its own surprises.

    Now, if you're saying that you've gone the other way -- bright scene then tone mapped darker to compensate -- I haven't seen that myself (nor can I envision many reasons to do this, but I thought I'd mention it). An example pic would be helpful, and do remember to note whether you're using the 4.9 beta.

  • Tobor Posts: 2,300

    As an additional aside, the tone mapper shouldn't add too much to the calculations. Iray internally renders to a 32-bit canvas with several stops' worth of exposure latitude. If you think it's the tone mapper causing the slowdown, rather than the light levels that the tone mapper is compensating for, you can always turn the Iray tone mapper off and render to a Beauty canvas. When done, save it, then open the EXR file (it's a 32-bit image) in an application like Photoshop that can handle HDR images. You can do your tone mapping in that application instead.
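
    For example, here's a rough Python sketch of doing that step outside Photoshop (I'm assuming the Beauty canvas was saved as "beauty.exr" and that imageio with EXR support is installed; the exposure value is arbitrary):

        # Rough sketch: apply exposure and a display gamma to a 32-bit Beauty canvas EXR.
        # Assumes "beauty.exr" is an RGB(A) image and imageio can read EXR on this system.
        import numpy as np
        import imageio.v3 as iio

        hdr = iio.imread("beauty.exr").astype(np.float32)[..., :3]   # linear RGB, drop alpha if present

        exposure_stops = 1.5                              # brighten by 1.5 stops; tweak to taste
        exposed = hdr * (2.0 ** exposure_stops)

        ldr = np.clip(exposed, 0.0, 1.0) ** (1.0 / 2.2)   # clamp, then a simple 2.2 display gamma
        iio.imwrite("beauty_tonemapped.png", (ldr * 255).astype(np.uint8))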

  • evilded777 Posts: 2,465

    Tobor... definitely using "less" light and tone mapping UP. The whole point of the article is about Iray being photo real and using "physically plausible" light setups... so no spotlights set at 100,000 lumens allowed.

    I am using the most current Beta.

    That's an interesting tip about the Beauty pass, I might use that for my own work; but that's way above the intended audience of this article.

    I have some other thoughts to explore that might remedy the situation, but only experimentation will tell if they are valid.

    It's just too bad that I am making these suggestions to get more realistic renders and lessen some of the potential issues, but there is this awful caveat.

  • Gr00vus Posts: 372

    no spotlights set at 100,000 lumens allowed

    I wouldn't be too steadfast about this - the lumens setting needs to relate to the geometry of the light: its size and its emission angle. If you've made a "big" light with a large emission angle, you'll probably need to increase the lumens to compensate for the greater volume of space you're trying to light with that source. Of course, if you're trying to adhere to reality, you'd probably only resort to that when modeling a large floodlight or something like it; otherwise you'd model lighting of such volumes with multiple light sources.

    Adhering to this principle definitely results in a much more lengthy (and tedious) setup phase for the scene than just pumping up one or two light sources to unreal levels. I guess that's what real-world light riggers have to deal with, to a much greater degree of course.

  • Tobor Posts: 2,300

    Tobor... definitely using "less" light and tone mapping UP. The whole point of the article is about Iray being photo real and using "physically plausible" light setups... so no spotlights set at 100,000 lumens allowed.

    While Iray is "physically based," that doesn't mean it follows real-world situations. Even in the movies it turns out there's a lot of trickery needed to make a scene look good, and in fact, the lighting is seldom "real."

    D|S and Iray lighting sources do follow physical laws, and it could be perfectly correct to have a spotlight putting out 100,000 lumens. The problem is that many people misunderstand how the values are used in D|S. For your article and/or general interest:

    1. In Iray, lumens are a measurement for the *entire* emitter surface. What many people do is increase the spotlight emitter to make it a soft box, which also diffuses and therefore spreads out the light. Yet they still expect 5,000 lumens (typical of a theatrical spotlight) to provide enough light for their scene. It doesn't work that way in real life, and it doesn't work that way here. You can focus the beam to concentrate those lumens into a smaller area, but that gives you sharper shadows. Or you can broaden the beam to get mild or no shadows, but then you also have to compensate by boosting the light output. These laws of physics have to be followed, too.

    2. Not all the light sources in D|S use the same geometry, so light output is not easily converted between instruments. A spotlight can have a flat plane emitter, and therefore its lumens are projected straight from that plane. Or it can use a spherical emitter and have only a portion of its rays directed toward the scene -- less incident light, even though the lumen value remains the same. The same goes for a pointlight, which by its nature (unless it's using an IES profile) projects its light 360 degrees, spreading it in all directions, including away from the camera. Simple surface area calculations (see the sketch after this list) show that 10,000 lumens projected from a 30cm plane will provide quite a bit more light on the scene than 10,000 lumens from a 30cm sphere.

    3. Other instruments, like the distant light, calculate their lumens as light incident on the scene, not emitted at the lamp. Therefore, an "unrealistic" 10 lumens for a distant light will create about the same illumination as the mid-day sun. Wha???  Actually, it's not unrealistic at all, because mid-day sunlight does in fact deliver about 9.3 lumens per square centimeter at the ground.

    4. Emissive geometries are the least understood, but they also follow physics quite well. There are luminosity settings rated in lumens or watts, and these don't consider surface area: a 1cm sphere will have a different light profile than a 1m sphere. Yet other settings *do* consider area. I like to use the cd/cm^2 setting, which is candelas per square centimeter. I prefer centimeters because that's the default scene unit for the rest of D|S. Here, light output per square centimeter remains unchanged even if the geometry is resized.
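
    To put rough numbers on the surface-area point in item 2 (back-of-the-envelope only; I'm assuming a 30cm x 30cm square plane and a 30cm-diameter sphere, since "a 30cm plane/sphere" doesn't pin down the exact geometry):

        # Back-of-the-envelope: the same 10,000 lumens spread over different emitter areas.
        # Assumes a 30cm x 30cm square plane and a 30cm-diameter sphere.
        import math

        lumens = 10_000.0

        plane_area = 30.0 * 30.0                      # cm^2, flat square emitter
        sphere_area = 4.0 * math.pi * 15.0 ** 2       # cm^2, full sphere surface

        print(f"plane : {lumens / plane_area:6.1f} lm/cm^2, all of it aimed toward the scene side")
        print(f"sphere: {lumens / sphere_area:6.1f} lm/cm^2, radiated in every direction")
        # ~11.1 vs ~3.5 lm/cm^2 -- and the sphere also wastes much of its output
        # shining away from the subject, so the scene sees far less light.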

    As for the increase in rendering speed under certain lighting, again I'll mention the needs of the renderer to calculate convergence. Samples and pixels and convergence aren't part of real life, but they're very much a real part of computer graphics. Darker scenes mean fewer ray hits, and it's ray hits that drive convergence. The convergence ratio is the biggest contributor to render speed. The general (but imperfect) solution to this is to provide more light to Iray, then compensate with tone mapping to get the brightness you want for the scene.

    As a point of reference, we've been fudging for well over 100 years. If you ever practiced chemical photography, you might know about a very common "unrealistic" technique of artificially boosting image intensity on film by leaving it in the developer solution longer (called push processing). This might be done for a number of reasons, not all of them because the original scene was too dim to get a good image. Push processing could also be used to increase contrast (which makes for more believable "day-for-night" scenes), or to increase film grain for a gritty look (Clint Eastwood liked to do this).

    Bending the rules to achieve an artistic result is really nothing new. You could get that realistic image using realistic settings, but expect a very long wait for it. At some point, one must also consider practicality over reality.

  • evilded777 Posts: 2,465
    edited November 2015

    Tobor, thanks for that.

     

    The whole "100,000 lumens" thing was just hyperbole, it was not intended as a specific example. I suppose 1 million or more lumens would have been a more ridiculous number to use, but now I'm not even really sure of that.

    I know all these things, at least superficially.  And part of the thrust of the article is about when to bend or break the rules, and knowing why you are doing it. I guess I just expected a different result.

    Part of me rebels at the idea of "throwing light at the problem" because of the old but sometimes still ongoing gamma correction debate, but I guess I need to see that for what it is (part of a different discussion) and let it go.

    Just need to re-think the article a bit. I still think it's better to do things "correctly" (though with my sample render, I can't see a damn difference between the over-cranked lights version and the tone mapped version).

    What are the benefits, if any, of using Tone Mapping vs cranking the lights? Seems to me like those lights are going to cause a problem somewhere...

  • Tobor Posts: 2,300
    edited November 2015

    When Iray first came out I thought its light calculations must be wrong, because the values didn't make any sense. But then after lots of experimentation and trying different things (and looking at the actual MDL files where these things are defined), I realized the math does work. The huge numbers come about because the kinds of light rigs used in D|S for Iray are themselves non-PB. Example: people like to use emissive geometries, but in the real world, pure wall-sized lights are quite rare. Broad lights are typically made using other techniques that, like a searchlight, concentrate their lumens into a more confined space. Blasting out light into a 180 degree hemisphere is not very efficient.

    As I said, you *can* use realistic light levels and camera settings for renders, but the drawback is the marked increase in rendering times, simply because of the lack of ray samples for each pixel. If your PC has a couple of Titan X cards in it, or you are using an NVIDIA VCA (with something like 26,000 cores), these kinds of tricky hyperreal scenes pose no problem at all.

    As few of us have this type of hardware at our disposal, the workable alternative is to crank up the lights to give Iray some photons to work with, and then compensate with the tone mapping to prevent washing out. That's not much different from how those glorious Technicolor movies of the 40s and 50s were made. The special Technicolor camera required literally 3X the light, because it used a beam splitter to expose three separate strips of high-resolution black-and-white film. The added light had to overcome the technical requirements for getting a good exposure. That's all we're doing with Iray.

    As for your question (and except in extremes), tone mapping alone doesn't alter the number of light rays in a scene, so you can't solve the render-time problem just by choosing a very fast ISO or an ultra-slow shutter speed. In order to render faster, you add more photons, then dial the tone mapping to compensate for the over-exposure. This is not ideal, but it can be a good tradeoff between realistic lighting effects and render efficiency.
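
    To make that bookkeeping concrete, here's a rough sketch using ordinary photographic exposure math (not Iray's internal pipeline; the starting camera values are arbitrary):

        # Rough sketch of the trade: doubling the scene's light and pulling back one
        # stop of exposure leaves the displayed brightness essentially unchanged, but
        # the renderer gets twice the photons per pass to average the noise out of.
        def relative_exposure(f_number, shutter_seconds, iso):
            """Relative brightness recorded by a camera-style tone mapper (arbitrary units)."""
            return shutter_seconds * iso / (f_number ** 2)

        scene_light = 1.0                                   # arbitrary starting light level
        base = scene_light * relative_exposure(f_number=8.0, shutter_seconds=1/125, iso=400)

        # Double the light, then compensate by halving the ISO (one stop down):
        compensated = (2 * scene_light) * relative_exposure(f_number=8.0, shutter_seconds=1/125, iso=200)

        print(f"{base:.6f} vs {compensated:.6f}")   # same displayed brightness, twice the photons for Iray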

  • evilded777 Posts: 2,465

    Educational, as always, but I think you missed the thrust of the question; so I shall rephrase: are there circumstances under which just cranking the lights up is going to be problematic? I thought maybe with highly reflective surfaces or refraction, but in my admittedly simple, early tests here, I'm not running into problems.  And I'm cursing myself at this point for having wasted time and breath evangelising this approach and using it in my own renders (although one of my early low-light renders did get some good commentary, and at least one person called it the most photo real they had ever seen).

  • Tobor Posts: 2,300

    I did miss this aspect of the question, and I have to say, I don't know. I haven't encountered anything, but I haven't done a lot of stress tests in this area. This is ideal research for someone with a Titan X or other high-end card, so that the results don't take a month of Sundays to compile.

    I think as long as the lights in the scene are balanced among themselves, and the increase is within (maybe) 8-9 stops, I wouldn't think it would be too problematic. Iray internally renders with a far wider latitude than what you see in the popup window -- perhaps 4-5 stops -- so it's already dealing with more brightness information than what we see in the final result. (We get this range if we save to canvases and EXRs.)
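
    To put numbers on those stops (back-of-the-envelope only; I'm taking the thread's "32-bit canvas" at face value and using float32's maximum value):

        # Quick arithmetic: a photographic "stop" is a factor of two in light.
        import math

        stops_boost = 9
        print(f"{stops_boost} stops -> x{2 ** stops_boost} more light")   # x512

        # A 32-bit float canvas can represent an enormously wider range than that,
        # so an 8-9 stop boost still fits comfortably inside the saved data.
        print(f"float32 max spans roughly {math.log2(3.4e38):.0f} stops above 1.0")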

    I don't see a reason to curse yourself for using real-world light/exposure values, especially as starting points. I think the practice should be encouraged, but with a footnote about how to break the rules for tough scenes. D|S doesn't yet support all the methods Iray offers for dealing with unusual or low light levels; for example, I don't think sky portals are yet implemented, even in the 4.9 beta. That one feature would help render scenes that are lit mostly from indirect lighting external to the visible scene.

  • evilded777 Posts: 2,465

    This was quite a learning experience for me.  I love to learn.

    Now... if someone could explain to me why mixing light types leads to an exponentially longer render time, I'd be even happier.

    I'm going to take a stab at answering it myself and suggest that mixing light types (lights, emissive surfaces -- especially emissive surfaces that are not cm^2, it seems -- and HDRI or Sun and Sky) is computationally expensive. Why? Do we need to convert the different light data to some standard? Or is it just dealing with the variety of data that is expensive?

    I've noticed this when adding more lights as well... it's not just mixing light types, but adding numbers of lights becomes more expensive quickly.

    It seems the best advice may be to do the job with as few lights as possible and make them of a reasonable intensity that will allow for some sort of manipulation to achieve the desired end result.

    On a recent render, I resorted to a render pass with emissive surfaces and then a separate pass with a single, broad fill light.  I actually ended up with a better image, IMO, than the overly complicated light setup that drove me to find a simpler solution... but I would have liked a little more control of the fill without: a) having to do more passes or b) jacking up the render time because of the number of lights.  But maybe that is impossible on my current hardware.

  • Tobor Posts: 2,300

    When Iray in D|S was young, someone here made the comment that "Iray loves lights," and so the train of thought since then was to just keep adding lights. Unfortunately, that's not entirely correct. Iray likes light, but as with any renderer that independently calculates light paths, each separate light source requires a computation.

    I haven't done experimenting to see what combination of light sources yields the fastest render, but I do know an HDRI alone is about the fastest you can get. I would imagine (but have no hard evidence) that Sun/Sky Only would come in second. I almost never use that mode; instead most of my renders use an HDRI coupled with two and sometimes three spotlights, with each spotlight acting more like a diffuse skybox. Those renders remain fairly fast, and they need to, as my hardware is limited to under 500 cores.

    Occasionally, I render with an emissive, but generally only when it's for in-scene lighting. Adding it drags down the render speed, but it's necessary for the scene, so there's not much to be done. I've found there are plenty of other things that slow down renders even more -- a "noisy" metal flake pattern in the shader, for example, really makes it crawl. 

  • evilded777 Posts: 2,465

    It sure would be nice if there were some better documentation on canvases, because it would make some of this easier; alas, they do not always work as one might expect, and for no apparent reason.
