Everything else being equal, is the color black easier on Iray?
Not sure how Iray processes light. Everything else being equal, is black or a darker color easier for Iray to render than white?
Comments
iRay renders a completely black scene very quickly.
Seriously, iRay is a physically based renderer. This means it attempts to realistically model the way light reflects off and is absorbed by materials. A scene with lots of mirrored surfaces is noticeably slower than one without. Beyond that, what tends to matter most is the amount of light in the scene, not the color of the surfaces.
TL;DR - Yes, but maybe not for the reason you think.
The color of the object actually doesn't make a difference; what matters is the object's transmission and reflection. To understand why, it helps to first understand how ray tracers work, which is essentially the reverse of how light works in the physical world: rays are traced from the camera out into the scene, rather than from the lights.
In ray tracing, a ray initially leaves the focal point of the camera, passes through a pixel of the image, and heads into the scene (one such ray per pixel). When a ray intersects a surface, the color of the surface at that point is allowed to influence the color of the pixel. That's called a "sample". Then the ray tracer generates two "child rays", one reflected and one transmitted. For a smooth and shiny surface, the reflected ray will bounce perfectly according to the Law of Reflection (the incident angle is equal to the reflected angle, and this is why normals are so important), while a very rough one will essentially go in a random direction. The transmitted ray is deflected according to the IOR (index of refraction) of the material. I think more rays are generated when the rays leave the material as well. This is what makes ray tracing computationally expensive... each ray is split into at least two every time it intersects a surface, and each of those splits into at least two, etc...
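Here's a very rough Python sketch of the recursion just described. This is not Iray's code: the helpers (scene.intersect, scene.environment, reflect, refract) and the material fields are made-up names for illustration, and real path tracers usually sample one bounce direction per hit instead of always splitting both ways.

```python
import numpy as np

def trace(ray, scene, depth):
    """Estimate the light (RGB) arriving along one camera ray.
    All scene/material helpers here are hypothetical."""
    if depth == 0:
        return np.zeros(3)                 # gave up before finding any light
    hit = scene.intersect(ray)             # hypothetical intersection test
    if hit is None:
        return scene.environment(ray)      # ray escaped to the environment light
    mat = hit.material
    tint = mat.color(hit.uv)               # the "sample": surface color at the hit
    # Spawn the two child rays: one reflected, one transmitted.
    reflected = reflect(ray, hit.normal, mat.roughness)      # hypothetical helper
    transmitted = refract(ray, hit.normal, mat.ior)          # hypothetical helper
    incoming = (mat.reflectance * trace(reflected, scene, depth - 1) +
                mat.transmittance * trace(transmitted, scene, depth - 1))
    # The surface color only tints whatever light the child rays eventually
    # bring back; in this sketch, light only ever enters via the environment.
    return tint * incoming
```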
But all of the samples along a ray's path are ultimately discarded unless that ray is shown to have come from a light source: either an actual intersection with an area light, or a statistical distribution for point sources (because the chance of a ray exactly intersecting a point is essentially zero). When there is such an intersection, all the samples are combined, and that's the color of the pixel that started this chain reaction at the very beginning.
So, if a material doesn't reflect much and doesn't transmit much, then that's fewer rays bouncing around your scene creating more and more child rays, which create more and more child rays, all of which need to be sampled for their contribution to their pixel's color. There's less to compute. If the material were black but, say, still very reflective, the ray tracer would still have to create those child rays, because the black color still has to be combined with the other samples for a given pixel.
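To make that concrete, here is a hypothetical continuation of the sketch above (again, made-up names, not Iray's actual logic): a surface that neither reflects nor transmits appreciably spawns no child rays at all, while a black mirror still spawns a reflected child whose result just gets multiplied by the near-black tint.

```python
EPS = 1e-4   # "negligible contribution" threshold, arbitrary for illustration

def spawn_children(ray, hit):
    """Return only the child rays worth tracing (hypothetical continuation
    of the trace() sketch above)."""
    mat = hit.material
    children = []
    if mat.reflectance > EPS:    # a dull, non-reflective surface skips this child
        children.append((mat.reflectance, reflect(ray, hit.normal, mat.roughness)))
    if mat.transmittance > EPS:  # an opaque surface skips this child
        children.append((mat.transmittance, refract(ray, hit.normal, mat.ior)))
    return children              # a black but shiny surface still returns a child
```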
Incidentally, that's why these objects are black: not because of their color, but because the renderer figured out that it could stop generating child rays before any of them interacted appreciably with a light source.
Actually, a colleague just pointed out that a renderer could, under certain conditions, figure out that it can stop generating child rays based on color, but it's a relatively minor optimization; the material's properties still make the overwhelming difference.
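In most path tracers that optimization goes by the name Russian-roulette termination (I can't say exactly how Iray implements it): once the tint accumulated along a path gets dark enough, the tracer randomly kills the path instead of spawning more children, and scales up the survivors so the average result is unchanged. A minimal sketch:

```python
import random

def russian_roulette(throughput):
    """Decide whether to keep extending a path given its accumulated tint
    (throughput, RGB components in 0..1). Generic technique, not Iray-specific."""
    survive_prob = min(1.0, max(throughput))        # dark paths rarely survive
    if random.random() > survive_prob:
        return None                                 # terminate: contribution is ~black
    return [c / survive_prob for c in throughput]   # compensate surviving paths
```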
Thanks for the detailed explanation above. For some reason I thought ray tracing used the color spectrum of the RGB value to calculate how many child-ray bounces were needed, instead of a predefined refraction value in the materials.
You sound like someone who would enjoy reading the Cycles Encyclopedia. It's not for IRay, but it is incredibly detailed so you could play around in Blender and satisfy your curiosity with simple experiments.
Sorry for plugging Blender again, but it's Open Source, which means you are guaranteed the right to learn how it works, and the community will happily support you to do so.
Don't forget refraction, back-scattering and caustics... (Atmospheric-scatter, when/if they eventually turn that on. AKA: Volumetric lighting.)
Ultimately, it truly depends on the actual shaders used in the "material". If there is no reflection, refraction, back-scattering, normals, bumps, additional SubD levels... then the color could, sort of, impact speed, both in a negative and a positive way.
Fewer reflections from "black holes" = more potential time needed for convergence. (You can't determine if a pixel is "done" if it starts at 0 and ends at 0. Plus, the remaining surfaces still need to wait for "direct light" if nothing is "reflecting light", or "refracting it", to light them up from an indirect angle.)
Thus, it can be a catch-22, as well as a double-edged blade, with or without sharp edges.
Just for the record...
Caustics are disabled by default because (although they are more true to real light reflection and refraction) it is a setting that can easily turn a 20 minute render into a 20 hour render to get that realism without too much "noise". {You can't do caustics and the denoiser at the same time. Well, you "can", but it destroys the caustics, according to the Iray documentation.}
https://www.youtube.com/watch?v=vlPpA18bmgU
Refractions/back-scattering can easily double render times or worse, turning a 20 minute render into a 2 hour render. (To truly get realistic results, you would ALSO need caustics on, making 2 hours into 20 to 200 hours for similar results without noise.)
https://www.youtube.com/watch?v=qvzPvC_O-GM
Reflections can be fast, depending on how "scattered" they are, and whether they also transmit color and intensity or just scatter illumination. There are two factors there... one for the actual "light", and one for the "super-imposed reflection". (It's like a fast and cheap way to handle reflections without duplicating every object in the entire scene onto every reflected surface. It simply cheats the results, for speed.)
Camera focal blur is unrelated to actual "light rendering", but it is a highly taxing mid-processing method to simulate a realistic "blur", based on depths from a focal plane. (Not a focal point, which would be "real". For speed, they use a linear plane of focus.) Virtually, it is like rendering the scene 5x: one centered camera, one shifted up, one shifted to the left, one down and one to the right, with the results of all those renders blended to create one output. {I said "virtually" because it is NOT actually done that way; it is the "effect" they simulate with a complex formula. But in reality, that is the virtual concept. With only two eyes, we simulate that by rendering two physical images from two actual camera positions, equal to the distance between our eyes. When one eye looks at one image and the other eye looks at the other, our brain blends the two images into one. Only the thing we focus on is "in focus"; the rest, further from that singular point, becomes blurry. Thus, we get a stereoscopic image that feels almost 3D to our brains.} (There's a rough sketch of the usual approach after the related link below.)
Related article... (Not how iray does it, exactly.) https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/
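For what it's worth, the way most path tracers get this effect (not necessarily how Iray does it) is thin-lens sampling: each camera ray's origin is jittered across a lens aperture and re-aimed at the plane of focus, and averaging many such rays per pixel produces the blur. A rough sketch, assuming the camera looks down +Z and using illustrative names:

```python
import math
import random

def thin_lens_ray(pinhole_dir, camera_pos, aperture_radius, focus_distance):
    """One depth-of-field camera ray via thin-lens sampling.
    pinhole_dir: normalized direction an ideal pinhole camera would use.
    Assumes the camera looks down +Z; all names here are illustrative."""
    # Where the pinhole ray crosses the plane of perfect focus (z = focus_distance).
    t = focus_distance / pinhole_dir[2]
    focus_point = [camera_pos[i] + pinhole_dir[i] * t for i in range(3)]
    # Jitter the ray origin to a random point on the lens aperture (a disk in XY).
    r = aperture_radius * math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    origin = [camera_pos[0] + r * math.cos(phi),
              camera_pos[1] + r * math.sin(phi),
              camera_pos[2]]
    # Re-aim at the focus point: objects on the focal plane stay sharp,
    # everything nearer or farther ends up blurred once samples are averaged.
    direction = [focus_point[i] - origin[i] for i in range(3)]
    length = math.sqrt(sum(d * d for d in direction))
    return origin, [d / length for d in direction]
```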
Colors... They only matter for the additional detail you see where simple shade depth falls short. Even to render a black-and-white scene, you still have to render the colors, because the colors combined are what determine the actual shade, which is normally limited to only 256 (absolute) levels of shade detail, unless you use the canvases to capture the floating-point shade values in the exposure range.

[24-bit color] is 8 bits per color R/G/B = 256/256/256, or 256 neutral shades: (0,0,0) black, (1,1,1) one up from black... (255,255,255) white. [48-bit color], at 16 bits per color, gives you many more levels, but only in two possible ranges: 65,536 for real 16-bit, and only 32,769 for Photoshop's 15+1-bit colors. That's even though most monitors can't display that color quality level directly. (Not going into 32-bit color, because that is what we normally see in Photoshop now. 48-bit color is not something most programs can even read or display, but it is the current upper limit of realistic data used to display 32-bit and 16-bit color through filters. I am excluding the "alpha" channel, because it is not present in all images and is unrelated to color.) There's a quick sanity check of those numbers after the link below.
https://en.wikipedia.org/wiki/Color_depth
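A quick sanity check of the shade and color counts mentioned above:

```python
levels_8bit = 2 ** 8              # 256 levels per channel
colors_24bit = levels_8bit ** 3   # 16,777,216 colors, but only 256 neutral greys
levels_16bit = 2 ** 16            # 65,536 levels per channel ("real" 16-bit)
levels_photoshop = 2 ** 15 + 1    # 32,769 levels (Photoshop's 15+1-bit mode)

print(levels_8bit, colors_24bit, levels_16bit, levels_photoshop)
# 256 16777216 65536 32769
```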