What is "Convergence"

It's a term in iray...a percentage to hit for quality levels....
What is converging?
I believe it is looking to see how much the pixels vary from pass to pass, when they hit a constant value they are taken as done. But that's inference, not fact, so could well be wrong.
Converging means several things moving towards a single destination. In the context of iRay this means how far it has to go (in percentage terms) for the render to be considered finished.
Havos... but what is converging? Richard's theory seems reasonable, though there'd need to be some kind of revision, I'd think, in between passes, or each iteration would be the same, no?
Sorry, I read your question as "What is" converging, rather than "What" is converging, my mistake.
Richard may indeed be correct, but it is a tricky one. It certainly would not represent the percentage of change completed, as the image changes massively at the start and often looks quite close to the final render after just a few percent. I have seen renders barely change after 10%, but others still have a lot of noise at 50% and only clean up nicely much later.
I suspect, and this is a programmer's perspective (which is what I am), that it is measuring the number of ray trace paths looked at, and then considering the number that might be needed to get a good render based on available light, size of render area, number of polys etc. But then I am just guessing.
From NVIDIA Advanced Rendering: http://www.nvidia-arc.com/products/iray/about-iray.html
"Interpolation techniques, which trade final quality, predictability, and simplicity of scene specification for performance, form the core of most current global illumination renderers. Unlike them, iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts."
Richard's answer sounds right to me. From what little maths I can remember from my college days, if you have a function which keeps getting closer to a final value but never quite reaches it you say it converges on that value.
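If it helps to see that in code: here's a toy Python sketch (my own illustration, nothing to do with Iray's actual internals) of a sequence that keeps getting closer to a value without ever exactly reaching it, with a threshold to decide "close enough":

```python
# Toy illustration (not Iray's code): a sequence that halves its distance
# to a limit each step "converges" on that value but never exactly hits it.
def converge_toward(limit, start=0.0, threshold=1e-6):
    """Step halfway to `limit` each iteration; stop when the gap is tiny."""
    value = start
    steps = 0
    while abs(limit - value) > threshold:
        value += (limit - value) / 2  # each step covers half the remaining gap
        steps += 1
    return value, steps

value, steps = converge_toward(10.0)
# value ends within a millionth of 10 after a couple dozen halvings
```

The point is that "done" is defined by the threshold, not by actually reaching the limit.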
LOL and this is why programmers need to be paired with 'everyday' speakers. What a glorious load of techno-babble that last sentence is!
I am an analyst programmer, with over thirty years' experience of the dark and evil arts, and even I had trouble figuring out what that hefty chunk of prose means.
Cheers,
Alex.
I believe NVIDIA employs champions of techno-babble. In the Iray documents, I can't believe how much they spin the words just to explain the idea of an array, for example. I'm a mathematician, and they nonetheless exhausted me with over-explained concepts. They want to explain everything at once, for all audiences. Some explanations are brilliant, but others make it tiring.
Yeah... not to interrupt, but I hate when people take a term or idea that has been around for a while and give it a new spin or a totally different name, or come up with a new feature and name it something illogical or non-descriptive.
CGI is so full of these baffling choices in terminology that it's a wonder anyone knows what anything does.
Especially annoying is when the description of what the term means is as useless as the chosen name.
It still kinda makes sense if you read it slowly...
" iray rendering is based on deterministic and consistent global illumination simulation algorithms that converge without introducing persistent approximation artifacts.”"
possible translation?
"Iray's global lighting doesn't create artifacts as it approaches completion" :)
The new Iray surface parameters have names that seem overly confusing too. It was easy to experiment with 3Delight parameters, but now it's difficult to know what the parameters do, and sometimes multiple settings have to be set properly to see even a slight change. True Iray surface presets are going to be extremely valuable.
You can say that again!
In computing, rather than solving an equation exactly using the formulas, you often have to do something called numerical approximation: you convert something like a differential equation into a series of successive additions, each one getting a little closer to the answer. But it's like moving halfway towards a wall with each step: you get closer with each one but never quite get there. The first step is large, the next not so much, the third is really quite small, and the fourth is tiny. Some threshold is set to say when it's "close enough". You never get to "done", but the algorithm has a way to say that more calculations are pointless because they make almost no difference. At that point it is said to have converged on the solution.
Based on the above, there is some global illumination metric that is being used to say that it is "close enough", perhaps the change in the luminance of each pixel, averaged over all pixels or something like that.
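To make that concrete, here's a tiny sketch of that kind of stopping rule (purely illustrative; the mean-absolute-change metric and the thresholds are my assumptions, not Iray's actual algorithm):

```python
# Toy stopping rule (assumed, not Iray's real code): each "iteration"
# refines every pixel a little, and we stop when the mean absolute change
# across the frame drops below a threshold.
def render_until_converged(refine, frame, threshold=0.001, max_iters=1000):
    for iteration in range(1, max_iters + 1):
        new_frame = refine(frame)
        mean_delta = sum(abs(a - b) for a, b in zip(new_frame, frame)) / len(frame)
        frame = new_frame
        if mean_delta < threshold:
            return frame, iteration  # "converged": changes too small to matter
    return frame, max_iters

# A stand-in refiner: every pixel moves halfway toward its true value.
true_pixels = [0.2, 0.8, 0.5, 1.0]
refine = lambda frame: [(p + t) / 2 for p, t in zip(frame, true_pixels)]
frame, iters = render_until_converged(refine, [0.0] * 4)
```

Note the engine never needs to know the "true" image; it only watches how much each pass changed things.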
convergence like crossing the streams? :lol: when Han Solo meets Indiana Jones
Trying to simplify it in my mind, so if say the target is 5, the formula spits out like a low of 1 and a high of 9, and each round it measures some metric's difference between them? Something like that?
Each iteration Iray looks at how much has changed since the previous iteration. When the amount of change is below a certain threshold Iray decides that it's "close enough".
Can I nominate this for "best comment of the update"? I'm struggling my way through figuring out what all the new bells and whistles do, but I'm hampered by not knowing what questions to ask, and whether this bell is more significant than that whistle.
It doesn't help as much as DAZ probably hoped to just say "look at the online manual". I can't find what I'm looking for, because I need to know the meaning of the name of what I'm looking for, before I can try to look for a link in the right place.
(Did that make sense? I thought it did when I wrote it, but now I'm not so sure.)
And here I was blaming my status as a second-language speaker for being lost.
But what is changing in each iteration to make results different?
Thanks. :) I've read the manual DAZ provided and also Sickleyield's deviantArt journal regarding these parameters, but the biggest sticking point for me is just how so many functions have to work in unison with others for anything to happen now. You don't just turn one dial to get specularity now, it takes an understanding of top coat settings and refraction and fresnel and whatnot to get it right.
In time, we'll probably be able to piece together settings from various products and use them on whatever similar figures and objects we need. For now though, it's a pretty confusing mess for anyone who isn't already very familiar with PBR parameters.
The characteristics of each pixel -- as you go along, the amount of change in each iteration decreases, so the amount of benefit from another iteration becomes smaller and smaller. You're never going to reach a point where it is "complete" (i.e. another iteration produces no change) so you eventually have to declare the improvement from another iteration to be too small to be worthwhile.
Very informative... unless you wanted to know what the Top Coat Weight means, and what it does apart from open the extra parameters. :-/
The same thing goes for the section on Base Bump, which is what I was originally looking for:-
In this case, why isn't there a min/max setting, and 0 - 50 what?
The few times I did tests with bump maps, I just used the live render preview and turned the bump value up until I could see it. Maybe 0 is the equivalent of -1.00? I don't know.
One thing that still puzzles me is having to set a negative value for SSS in order to determine light bounce direction. Subsurface settings are really confusing now compared to how they were in 3Delight. It would be nice to get a large package of subsurface presets for Iray like the one Age of Armour (I think?) did for the SSS shader.
Maybe I'm not asking the right question.
How or why do the characteristics of each pixel change? Grinch invoked Zeno's Paradox earlier in the thread, but to get halfway to the wall, you have to know where the wall is.
In theory (and simplifying it down), there's a formula, say x * 2 = the finished pixel, and then on the next iteration the delta between the measured pixel values determines the convergence.
So my question is... how is it figuring the next X? If X = 10 for 100% convergence, does it go {5, 6, 7, 8, 9...} or {5, 15, 7.5, 12.5...}? I would think it has to be the latter, because if it were the former, why doesn't it just start at 10?
Maybe my question is... why is convergence used, rather than hitting the final value? That's what 3Delight does.
You don't actually know what the final value is -- all you know is that each iteration should be a closer approximation, and that the amount of change from one iteration to the next decreases as the number of iterations increases. So you measure how much changed in the latest iteration, and when the amount of change is small enough you declare it close enough, that it isn't worth continuing for such little improvement.
So it's using the equivalent of the {5,15, 7.5, 12.5…} method to find 10 ?
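That over-and-under pattern is exactly what a running average of noisy samples does. A toy Python illustration (my assumption of the mechanism, not Iray internals): each noisy sample pushes the pixel's running average above or below the true value, with swings that shrink as samples accumulate.

```python
import random

# Toy illustration (assumed mechanism, not Iray's code): a pixel's value as
# the running average of noisy samples. It overshoots and undershoots the
# true value early on and settles as samples pile up.
random.seed(42)
true_value = 10.0
total, estimates = 0.0, []
for n in range(1, 1001):
    sample = true_value + random.uniform(-5, 5)  # each sample is noisy
    total += sample
    estimates.append(total / n)  # running average = current pixel estimate

# Early estimates swing widely; late ones barely move.
early_swing = max(estimates[:10]) - min(estimates[:10])
late_swing = max(estimates[-10:]) - min(estimates[-10:])
```

So the {5, 15, 7.5, 12.5...} picture is roughly right: the estimate brackets the answer from both sides, and "converged" means the bracketing has become tight.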
Since we don't know what the actual algorithms are, and Nvidia is unlikely to share them for fear of revealing their secret sauce, we can speculate until the proverbial cows come home and still be no closer to any precise understanding of what is going on during a render. The explanations available, like that posted earlier, read like technobabble not because of a failure to communicate clearly, but because of a deliberate attempt to obfuscate in a seemingly impressive manner. It was probably written by a marketer, rather than a programmer, who represents the commercial equivalent of a political spin doctor.
As for convergence, as others have said, that logically involves approaching an acceptable value, bearing in mind that we're talking about a great many complex calculations taking place under the hood. I believe that it is wrong to think in terms of absolute values, though, for example a target value of ten. Instead, the engine is looking at the deltas - the changes that are occurring as the calculations progress. Those changes will be large initially (and the magnitude will vary depending on the size of the render job) and grow progressively smaller as the process nears completion - in other words as the delta between iterations approaches zero. Given that the delta will decline as this happens, the approach to zero will slow until there is no practical reason to continue given the diminishing returns, and the render engine programming therefore is designed to call it quits when little further progress is being made.
So how does the render engine know if the point at which the changes are minimal is the point at which the render is nearly perfect, some might ask? Well, it doesn't. It simply knows that there is little point in carrying on, and if the results are insufficient from the user's perspective, the user needs to look at changing up the parameters that they have specified to achieve the desired results. The render engine is dumb as a post and is merely following instructions, part of which are hard coded, and part of which are specified by the user. When it throws in the towel, it is saying "that is the best I can do with what you gave me".
Do I know this all for a fact? Nope. But it is what I believe Occam's Razor would suggest.
Old topic, but confusion remains. Let's see:
When rendering a 1920x1080 image, the final image begins as 2,073,600 pixels (1920 × 1080) of pure noise: 0% converged.
Then Iray shoots "rays" from the light sources toward the objects your camera is facing. Every pixel hit on the scene is converted from "noise" to an "illuminated" pixel.
Thus, convergence percent = pixels illuminated by rays / total pixels (the rest are still noise, waiting to be hit).
Edit: Also, a single pixel's proper (subpixel) illumination happens when it is hit by a set of rays from different angles. This depends on the "rendering quality" slider.
Due to how surfaces reflect or absorb light in the scene, some pixels will be easier or harder for rays to hit.
For example, if the scene has dim light(s), it will be harder to illuminate the surfaces the camera is looking at, because most of the light will be absorbed before it reaches them.
Also, the higher the convergence target, the longer it will take Iray to find and shoot correctly angled rays from the sources to the remaining noisy pixels.
I hope this makes sense.
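The percentage bookkeeping described above could be sketched like this (illustrative only; the per-pixel threshold and the "converged" flagging are my assumptions, not Iray's code):

```python
# Sketch of per-pixel convergence bookkeeping (assumed, not Iray's code):
# a pixel counts as "converged" once its estimate stops changing much, and
# overall convergence is the fraction of such pixels.
def convergence_percent(deltas, per_pixel_threshold=0.01):
    """deltas: each pixel's last change in value during the latest pass."""
    converged = sum(1 for d in deltas if abs(d) < per_pixel_threshold)
    return 100.0 * converged / len(deltas)

print(convergence_percent([0.001, 0.5, 0.002, 0.02]))  # 2 of 4 pixels -> 50.0
```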
Oh well. When I hear "convergence" I always imagine the point where the lines of fire of the wing-mounted guns of a WWII fighter plane meet and cross the pilot's straight line of sight, if seen directly from above or below.
The render starts blank.
Iray traces paths from source to surface to surface until the path stops bouncing (I think that is handled by some kind of energy threshold). That pixel is then coloured. Other paths will also resolve to that pixel, with different colours depending on their light source and which other surfaces were involved. Iray combines those results to keep updating the colour.
Convergence, as I understand it, is a measure of how settled the pixel is: initially it will vary a lot as new paths affect it, but over time they should produce just slight variations on its current colour, and when the change is "small enough" it will be counted as converged.
Yes
Broadly yes - dim isn't the issue, though, but rather how likely a light path is to hit the surface in question; a bright point light will converge more slowly than a scattering of dimmer lights with a lower total luminance.
Again, it's not a matter of hit-and-done - it's a matter of being hit enough to settle on a final value. Higher convergence requires more pixels to be deemed converged, the quality setting determines how settled the values have to be to count as converged.
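That relationship between the quality setting and the per-pixel test might look something like this sketch (the numbers and the 1/quality scaling are pure assumptions on my part, just to show the idea of "stricter threshold = more settled pixels required"):

```python
# Assumed relationship (illustrative only, not Iray's actual parameters):
# a higher quality setting tightens the per-pixel threshold, so a pixel
# has to be more "settled" before it counts as converged.
def is_pixel_converged(recent_changes, quality=1.0):
    base_threshold = 0.01
    threshold = base_threshold / quality  # higher quality -> stricter test
    return all(abs(c) < threshold for c in recent_changes)

# The same pixel can pass at quality 1.0 but fail at quality 4.0:
changes = [0.004, 0.006, 0.005]
```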