What is "Convergence"

Scavenger Posts: 2,674
in The Commons

It's a term in iray...a percentage to hit for quality levels....

What is converging?


Comments

  • Richard Haseltine Posts: 102,792

    Scavenger said:
    It's a term in iray...a percentage to hit for quality levels....

    What is converging?

    I believe it is looking to see how much the pixels vary from pass to pass, when they hit a constant value they are taken as done. But that's inference, not fact, so could well be wrong.

  • Havos Posts: 5,403

    Converging means several things moving towards a single destination. In the context of iRay this means how far it has to go (in percentage terms) for the render to be considered finished.

  • Scavenger Posts: 2,674

    Havos said:
    Converging means several things moving towards a single destination. In the context of iRay this means how far it has to go (in percentage terms) for the render to be considered finished.

    Havos... but what is converging? Richard's theory seems reasonable, though I'd think there would need to be some kind of revision in between passes, or each iteration would be the same, no?

  • Havos Posts: 5,403

    Scavenger said:
    Havos said:
    Converging means several things moving towards a single destination. In the context of iRay this means how far it has to go (in percentage terms) for the render to be considered finished.

    Havos... but what is converging? Richard's theory seems reasonable, though I'd think there would need to be some kind of revision in between passes, or each iteration would be the same, no?

    Sorry, I read your question as "What is" converging, rather than "What" is converging, my mistake.

    Richard may indeed be correct, but it is a tricky one. It certainly would not represent the percentage of change completed, as the image changes massively at the start and often looks quite close to the final render after just a few percent. I have seen renders barely change after 10%, while others still have a lot of noise at 50% and clean up nicely much later.

    I suspect, and this is a programmer's perspective (which is what I am), that it is measuring the number of ray-trace paths examined, and then considering the number that might be needed to get a good render based on available light, size of the render area, number of polys, etc. But then I am just guessing.

  • Kevin Sanderson Posts: 1,643

    From NVIDIA Advanced Rendering: http://www.nvidia-arc.com/products/iray/about-iray.html

    "Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts."

  • Peter Wade Posts: 1,642
    edited June 2015

    Richard's answer sounds right to me. From what little maths I can remember from my college days, if you have a function which keeps getting closer to a final value but never quite reaches it you say it converges on that value.

  • SnowSultan Posts: 3,645
    edited June 2015

    “Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts.”


    LOL and this is why programmers need to be paired with 'everyday' speakers. What a glorious load of techno-babble that last sentence is!

  • alexhcowley Posts: 2,392

    “Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts.”


    LOL and this is why programmers need to be paired with 'everyday' speakers. What a glorious load of techno-babble that last sentence is!

    I am an analyst programmer, with over thirty years' experience of the dark and evil arts, and even I had trouble figuring out what that hefty chunk of prose means.

    Cheers,

    Alex.

  • almahiedra Posts: 1,353

    “Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts.”


    LOL and this is why programmers need to be paired with 'everyday' speakers. What a glorious load of techno-babble that last sentence is!

    I believe nvidia employs champions of techno-babble. In the Iray documents, I can't believe how much they spin the words just to explain the idea of an array, for example. I'm a mathematician, and nonetheless they exhausted me with over-explained concepts. They want to explain everything in one go, for all audiences. Some explanations are brilliant, but others are tiring.

  • McGyver Posts: 7,066

    Yeah... not to interrupt, but I hate when people take a term or idea that has been around for a while and give it a new spin or a totally different name, or come up with a new feature and name it something illogical or non-descriptive.
    CGI is so full of these baffling choices in terminology it's a wonder anyone knows what anything does.
    Especially annoying is when the description of what the term means is as useless as the chosen name.

  • SnowSultan Posts: 3,645
    edited June 2015

    It still kinda makes sense if you read it slowly...

    "iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts."


    possible translation?

    "Iray's global lighting doesn't create artifacts as it approaches completion" :)



    The new Iray surface parameters have names that seem overly confusing too. It was easy to experiment with 3Delight parameters, but now it's difficult to know what the parameters do, and sometimes multiple settings have to be set properly to see even a slight change. True Iray surface presets are going to be extremely valuable.

  • Scavenger Posts: 2,674

    It still kinda makes sense if you read it slowly...

    The new Iray surface parameters have names that seem overly confusing too. It was easy to experiment with 3Delight parameters, but now it's difficult to know what the parameters do, and sometimes multiple settings have to be set properly to see even a slight change. True Iray surface presets are going to be extremely valuable.

    You can say that again!

  • grinch2901 Posts: 1,246
    edited June 2015

    From NVIDIA Advanced Rendering: http://www.nvidia-arc.com/products/iray/about-iray.html

    "Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts."

    In computing, rather than solving an equation exactly with formulas, you often have to use numerical approximation: you convert something like a differential equation into a series of successive additions, each one getting a little closer to the answer. It's like moving halfway towards a wall with each step: you get closer every time but never quite arrive. The first step is large, the next not so much, the third fairly small, and the fourth tiny. Some threshold is set to say when it's "close enough". You never get to "done", but the algorithm has a way to say that more calculations are pointless because they make almost no difference. At that point it is said to have converged on the solution.

    Based on the above, there is some global illumination metric that is being used to say that it is "close enough", perhaps the change in the luminance of each pixel, averaged over all pixels or something like that.
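    The halfway-to-the-wall idea above can be sketched in a few lines of Python. This is a toy illustration of "converging on a solution", not Iray's actual code; the names and tolerance here are invented.

    ```python
    # Toy sketch of convergence (not Iray's real algorithm): each step moves
    # halfway toward the final value, and we stop when the next step would
    # change the value by less than a tolerance -- "close enough".
    def converge(start, wall, tol=1e-3):
        x, steps = start, 0
        while abs(wall - x) * 0.5 > tol:   # would the next step still matter?
            x += (wall - x) * 0.5          # take the half-step
            steps += 1
        return x, steps

    x, steps = converge(0.0, 10.0)
    # x ends up near 10 but never exactly 10; the loop simply declared
    # further steps not worth taking
    ```

    Note the stopping rule only looks at the size of the step, which is why a renderer can use the same trick without knowing the "wall" in advance.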

  • Mistara Posts: 38,675

    convergence like crossing the streams? :lol: when Han Solo meets Indiana Jones

  • Scavenger Posts: 2,674

    Trying to simplify it in my mind, so if say the target is 5, the formula spits out like a low of 1 and a high of 9, and each round it measures some metric's difference between them? Something like that?

  • fixmypcmike Posts: 19,613

    Each iteration Iray looks at how much has changed since the previous iteration. When the amount of change is below a certain threshold Iray decides that it's "close enough".
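    That stopping rule can be sketched like this (assumed behaviour for illustration only, not Iray's real implementation): keep folding noisy light samples into a pixel's running average, and stop once one more sample barely moves it.

    ```python
    import random

    def refine_pixel(threshold=1e-4, max_iters=100_000, seed=42):
        """Toy model: average random 'light samples' for one pixel until the
        change from one iteration to the next falls below a threshold."""
        rng = random.Random(seed)
        mean = rng.uniform(0.0, 1.0)            # first sample
        for n in range(2, max_iters + 1):
            sample = rng.uniform(0.0, 1.0)      # stand-in for one traced path
            new_mean = mean + (sample - mean) / n   # incremental average
            if abs(new_mean - mean) < threshold:    # tiny change: converged
                return new_mean, n
            mean = new_mean
        return mean, max_iters

    value, iters = refine_pixel()
    ```

    A real renderer would be cleverer (e.g. comparing over several passes so one lucky sample doesn't end things early), but the shape of the decision is the same.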

  • SpottedKitty Posts: 7,232

    The new Iray surface parameters have names that seem overly confusing too. It was easy to experiment with 3Delight parameters, but now it's difficult to know what the parameters do, and sometimes multiple settings have to be set properly to see even a slight change.

    Can I nominate this for "best comment of the update"? I'm struggling my way through figuring out what all the new bells and whistles do, but I'm hampered by not knowing what questions to ask, and whether this bell is more significant than that whistle.

    It doesn't help as much as DAZ probably hoped to just say "look at the online manual". I can't find what I'm looking for, because I need to know the meaning of the name of what I'm looking for, before I can try to look for a link in the right place.

    (Did that make sense? I thought it did when I wrote it, but now I'm not so sure.)

  • Hera Posts: 1,958

    “Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts.”


    LOL and this is why programmers need to be paired with 'everyday' speakers. What a glorious load of techno-babble that last sentence is!

    I am an analyst programmer, with over thirty years' experience of the dark and evil arts, and even I had trouble figuring out what that hefty chunk of prose means.

    Cheers,

    Alex.

    And I blamed my status as a second-language speaker for being lost.

  • Scavenger Posts: 2,674

    Each iteration Iray looks at how much has changed since the previous iteration. When the amount of change is below a certain threshold Iray decides that it's "close enough".

    But what is changing in each iteration to make results different?

  • SnowSultan Posts: 3,645

    Can I nominate this for “best comment of the update”?

    Thanks. :) I've read the manual DAZ provided and also Sickleyield's deviantArt journal regarding these parameters, but the biggest sticking point for me is just how so many functions have to work in unison with others for anything to happen now. You don't just turn one dial to get specularity now, it takes an understanding of top coat settings and refraction and fresnel and whatnot to get it right.

    In time, we'll probably be able to piece together settings from various products and use them on whatever similar figures and objects we need. For now though, it's a pretty confusing mess for anyone who isn't already very familiar with PBR parameters.

  • fixmypcmike Posts: 19,613

    Scavenger said:
    Each iteration Iray looks at how much has changed since the previous iteration. When the amount of change is below a certain threshold Iray decides that it's "close enough".

    But what is changing in each iteration to make results different?

    The characteristics of each pixel -- as you go along, the amount of change in each iteration decreases, so the amount of benefit from another iteration becomes smaller and smaller. You're never going to reach a point where it is "complete" (i.e. another iteration produces no change) so you eventually have to declare the improvement from another iteration to be too small to be worthwhile.
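    The diminishing returns described above fall out of simple arithmetic: the n-th sample can move an n-sample average by at most 1/n of the sample range, so each extra pass buys less than the one before. A back-of-envelope illustration (not Iray specifics):

    ```python
    # Maximum possible change the n-th sample can make to a running average
    # of values in [0, 1]: |sample - mean| / n <= 1 / n.
    max_change = [1.0 / n for n in range(1, 6)]
    # the bound shrinks monotonically -- later iterations can barely move
    # the pixel, which is why "one more pass" eventually stops paying off
    ```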

  • SpottedKitty Posts: 7,232

    It doesn't help as much as DAZ probably hoped to just say "look at the online manual". I can't find what I'm looking for, because I need to know the meaning of the name of what I'm looking for, before I can try to look for a link in the right place.
    Following on from my earlier comment, an excellent example — I finally stumbled across (it ought to be a lot easier to find things in the manual!) this section about setting the Top Coat material parameters:-

    Top Coat Weight - When this property is set to more than 0, it adds the top most layer to the shader. Use examples would car paint or varnish. Its value can be range from 0.0 to 1.0.

    When set to more than 0, it opens several other properties.

    Very informative... unless you wanted to know what the Top Coat Weight means, and what it does apart from open the extra parameters. :-/

    The same thing goes for the section on Base Bump, which is what I was originally looking for:-

    Base Bump - The Base bump is set using a typical bump map. It works the same way as bump in 3Delight, but is dependent on the image map as there is no min/max value. When converting materials from 3Delight to Iray, the bump will need a higher value in Iray than the original material used. For this reason, the value slider has a range of 0 to 50.

    In this case, why isn't there a min/max setting, and 0 - 50 what?

  • SnowSultan Posts: 3,645

    In this case, why isn’t there a min/max setting, and 0 - 50 what?

    The few times I did tests with bump maps, I just used the live render preview and turned the bump value up until I could see it. Maybe 0 is the equivalent of -1.00? I don't know.


    One thing that still puzzles me is having to set a negative value for SSS in order to determine light bounce direction. Subsurface settings are really confusing now compared to how they were in 3Delight. It would be nice to get a large package of subsurface presets for Iray like the one Age of Armour (I think?) did for the SSS shader.

  • Scavenger Posts: 2,674

    Scavenger said:
    Each iteration Iray looks at how much has changed since the previous iteration. When the amount of change is below a certain threshold Iray decides that it's "close enough".

    But what is changing in each iteration to make results different?

    The characteristics of each pixel -- as you go along, the amount of change in each iteration decreases, so the amount of benefit from another iteration becomes smaller and smaller. You're never going to reach a point where it is "complete" (i.e. another iteration produces no change) so you eventually have to declare the improvement from another iteration to be too small to be worthwhile.

    Maybe I'm not asking the right question.

    How or why do the characteristics of each pixel change? Grinch invoked Zeno's Paradox earlier in the thread, but to get halfway to the wall, you have to know where the wall is.

    In theory (and simplifying it down), there's some formula, say x * 2 = a finished pixel, and then on the next iteration the delta between the measured pixel values is used to determine the convergence.

    So my question is... how is it figuring out the next X? If X=10 at 100% convergence, does it go {5, 6, 7, 8, 9...} or {5, 15, 7.5, 12.5...}? I would think it has to be the latter, because if it were the former, why wouldn't it just start at 10?

    Maybe my question is... why is convergence used, rather than hitting the final value? That's what 3Delight does.

  • fixmypcmike Posts: 19,613

    You don't actually know what the final value is -- all you know is that each iteration should be a closer approximation, and that the amount of change from one iteration to the next decreases as the number of iterations increases. So you measure how much changed in the latest iteration, and when the amount of change is small enough you declare it close enough, that it isn't worth continuing for such little improvement.

  • Scavenger Posts: 2,674

    So it's using the equivalent of the {5, 15, 7.5, 12.5...} method to find 10?

  • SixDs Posts: 2,384

    Since we don't know what the actual algorithms are, and Nvidia is unlikely to share them for fear of revealing their secret sauce, we can speculate until the proverbial cows come home and still be no closer to any precise understanding of what is going on during a render. The explanations available, like that posted earlier, read like technobabble not because of a failure to communicate clearly, but because of a deliberate attempt to obfuscate in a seemingly impressive manner. It was probably written by a marketer, rather than a programmer, who represents the commercial equivalent of a political spin doctor.

    As for convergence, as others have said, that logically involves approaching an acceptable value, bearing in mind that we're talking about a great many complex calculations taking place under the hood. I believe that it is wrong to think in terms of absolute values, though, for example a target value of ten. Instead, the engine is looking at the deltas - the changes that are occurring as the calculations progress. Those changes will be large initially (and the magnitude will vary depending on the size of the render job) and grow progressively smaller as the process nears completion - in other words as the delta between iterations approaches zero. Given that the delta will decline as this happens, the approach to zero will slow until there is no practical reason to continue given the diminishing returns, and the render engine programming therefore is designed to call it quits when little further progress is being made.

    So how does the render engine know if the point at which the changes are minimal is the point at which the render is nearly perfect, some might ask? Well, it doesn't. It simply knows that there is little point in carrying on, and if the results are insufficient from the user's perspective, the user needs to look at changing up the parameters that they have specified to achieve the desired results. The render engine is dumb as a post and is merely following instructions, part of which are hard coded, and part of which are specified by the user. When it throws in the towel, it is saying "that is the best I can do with what you gave me".

    Do I know this all for a fact? Nope. But it is what I believe Occam's Razor would suggest.

  • Dolce Saito Posts: 195
    edited December 2020

    Old topic, but the confusion remains. Let's see:

     

    When rendering a 1920x1080 image, the final image begins as complete white noise across 2,073,600 pixels (1920 × 1080): 0% converged.

    Then Iray tries to shoot "rays" from light sources to the objects your camera is facing. Every pixel hit in the scene is converted from a "white noise" pixel to an "illuminated" pixel.

    Thus, convergence percent = the count of pixels illuminated by rays / the count of white-noise pixels (pixels yet to be hit by rays).

    Edit: Also, a single pixel's proper (subpixel) illumination happens when it is hit by a set of rays at different angles. This depends on the "rendering quality" slider.

    Due to how surfaces reflect or absorb light in the scene, the likelihood of being hit by the rays will be higher or lower.

    For example, if the scene has dim light(s), it will be harder to illuminate the surfaces the camera is looking at, because most of the light will be absorbed before it illuminates pixels in the scene.

    Also, the higher the convergence target, the longer it will take Iray to find and shoot correctly angled rays from sources to those white-noise pixels.

    I hope this makes sense.

  • watchdog79 Posts: 1,026

    Oh well. When I hear "convergence" I always imagine the point where the lines of fire of the wing-mounted guns of a WWII fighter plane meet and cross the pilot's straight line of sight, seen directly from above or below.

  • felix_nukem said:

    Old topic, but the confusion remains. Let's see:

     

    When rendering a 1920x1080 image, the final image begins as complete white noise across 2,073,600 pixels (1920 × 1080): 0% converged.

    The render starts blank.

    Then Iray tries to shoot "rays" from light sources to the objects your camera is facing. Every pixel hit in the scene is converted from a "white noise" pixel to an "illuminated" pixel.

    Thus, convergence percent = the count of pixels illuminated by rays / the count of white-noise pixels (pixels yet to be hit by rays).

    Iray traces paths from source to surface to surface until the path stops bouncing (I think that is handled by some kind of energy threshold). That pixel is then coloured. Other paths will also resolve to that pixel, with different colours depending on their light source and which other surfaces were involved. Iray combines those results to keep updating the colour.

    Convergence, as I understand it, is a measure of how settled the pixel is - initially it will vary a lot as new paths affect it, but over time they should be just slight variations on its current colour, and when the change is "small enough" it will be counted as converged.

    Due to how surfaces reflect or absorb light in the scene, the likelihood of being hit by the rays will be higher or lower.

    Yes

    For example, if the scene has dim light(s), it will be harder to illuminate the surfaces the camera is looking at, because most of the light will be absorbed before it illuminates pixels in the scene.

    Broadly yes - dim isn't the issue, though, but rather how likely a light path is to hit the surface in question; a bright point light will converge more slowly than a scattering of dimmer lights with a lower total luminance.

    Also, the higher the convergence target, the longer it will take Iray to find and shoot correctly angled rays from sources to those white-noise pixels.

    Again, it's not a matter of hit-and-done - it's a matter of being hit enough times to settle on a final value. A higher convergence target requires more pixels to be deemed converged; the quality setting determines how settled the values have to be to count as converged.

    I hope this makes sense.
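    Pulling the thread's consensus together, image-level convergence can be sketched as the fraction of pixels whose pass-to-pass change falls below a per-pixel tolerance. This is a guess at the mechanism; the function name and thresholds below are invented for illustration, not taken from Iray.

    ```python
    def convergence_ratio(prev, curr, pixel_tol=0.01):
        """Fraction of pixels that changed by less than pixel_tol between
        two successive passes (toy model, not Iray's actual metric)."""
        settled = sum(1 for a, b in zip(prev, curr) if abs(a - b) < pixel_tol)
        return settled / len(prev)

    prev = [0.50, 0.20, 0.80, 0.10]    # pixel values after pass N
    curr = [0.505, 0.205, 0.80, 0.30]  # after pass N+1; last pixel still noisy
    ratio = convergence_ratio(prev, curr)   # 3 of 4 pixels settled -> 0.75
    # the render would keep iterating until this ratio reached the user's
    # target (e.g. 95% converged) or hit the time/iteration limits
    ```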
