
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
I've been a tad under the weather lately. Before I run off to empty my nose again: what is "REYES"? Spanish for 'kings', or a surname? Or this thing... (look of being completely and utterly lost)
http://en.wikipedia.org/wiki/Reyes_rendering
Is it something I'm already using and don't know it, or something that renders similar to 3DL in studio?
Your test results, Kettu, are interesting and quite thought-provoking. I had started to do some tests with the 3DL built into Studio, so far without any maps, for a baseline, and the results so far are very interesting to say the least. I also was not looking at it from the standpoint of which shader or render engine is better; I was just trying to figure out what was making the HD figures' SSS precompute delay so impossible to work with. So unlike the very valid and good tests Kettu had done, I was looking at how to make a simple cylinder object as much like G3F as possible, while still being able to eliminate one thing at a time (like the number of SubD levels, or the seams between surface zones, etc.).
That's the basic mode 3DL renders in...
Of course, most people use RGB opacity maps for all surfaces that need an opacity mask (grass, leaves etc.). Even 1 or 2% will make a big difference once you have lots of them in the shot, particularly if you then enable reflection or any UE2 mode other than AO. Did some tests with the IDL and bounceGI modes yesterday. For opacity-mapped surfaces (like leaves), diffuse bounces need to be as low as possible. I'm guessing that's why omni included that diffuse ray trace depth override, though it didn't work (with your script). Haven't yet tried running with it set to 1 or 0 (zero) with IDL while not using your script, though.
Omni's bounce limits in US2 seem to only work with his light shaders in the default scenario - UberEnvironment2 and UberSoftLightKit. They are not the standard 3DL trace depths that are done via RiAttributes; they are inside the shader code. Since we have no source code for his shaders, I have no idea what he did there - some clever message passing, most likely, but it's impossible to reverse-engineer.
I have included the standard 3DL ray depths in the controls of each of my shaders where they are applicable. They are RiAttributes, so they are set per surface, which means a given surface can have higher diffuse/specular bounce depths than the others - as long as the value is below the total max ray cap in the render settings (which is an RiOption = per-scene).
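Since posting RSL source isn't practical here, a minimal Python sketch of the interaction described above: a per-surface depth (an RiAttribute) is only honoured up to the scene-wide cap (an RiOption). The function and values are illustrative stand-ins, not 3DL internals.

```python
# Illustration: per-surface trace-depth attributes are honoured only up to
# the scene-wide cap set in render settings (per-scene, like an RiOption).
SCENE_MAX_DEPTH = 4  # hypothetical "total max ray cap" from render settings

def effective_depth(surface_attr_depth, scene_max=SCENE_MAX_DEPTH):
    """Per-surface depth (RiAttribute-style) clamped by the per-scene cap."""
    return min(surface_attr_depth, scene_max)

# Glass asks for 12 specular bounces, but the scene cap wins:
print(effective_depth(12))  # -> 4
# Cloth asks for 1 diffuse bounce; below the cap, so it's honoured:
print(effective_depth(1))   # -> 1
```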
Yes it's this thing and you are using it whenever you are not using the "progressive" button. It's 3Delight's original architecture.
Renders Everything You Ever Saw - REYES. https://renderman.pixar.com/view/performance
And any other Renderman compliant renderer basically.
I usually render with progressive on. In the cases where I tried the "REYES" version, I had the feeling it wasn't "finishing" the render, with areas that were more precise in the progressive-on mode. I'm using the built-in version from DS 4.8; maybe that makes a difference compared to the 4.9 version?
Example?
The only thing I can think of is that the filter and samples are set strangely in REYES mode...
To elaborate a little...
Progressive mode uses the Box filter at 1x1 filter width. This overrides any other setting. The default for REYES is Sinc at 6x6 width. Others can be less 'sharp'/distinct (Gaussian can be rather blurred). So a sample render and the settings used would be the place to start...
REYES shading rate is 1 by default. For the "smaller" resolutions we usually render at, this is too high... Could be the reason why the renders looked "wrong".
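For anyone curious why Sinc 6x6 looks so much more 'distinct' than progressive's Box 1x1, here's a toy Python sketch of the two filter weight curves. Real renderers window the sinc; this bare version still shows the negative lobes that do the sharpening (and the occasional ringing on hard edges).

```python
import math

def box(x, width):
    """Box filter: constant weight everywhere inside the support."""
    return 1.0 if abs(x) <= width / 2 else 0.0

def sinc(x, width):
    """Bare (unwindowed) sinc filter weight. The negative lobes between
    the zero crossings are what make Sinc renders look extra sharp."""
    if abs(x) > width / 2:
        return 0.0
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Box 1x1 (progressive mode): flat weight, no negative lobes -> soft but safe.
print(box(0.4, 1))        # -> 1.0
# Sinc 6x6 (REYES default): weights go negative between lobes.
print(sinc(1.5, 6) < 0)   # -> True
```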
Actually it's 2016 already, and REYES is a thing of the past. Unless we're only doing certain types of NPR that have no raytrace calls at all, the raytracer will be faster.
REYES, DeepShadowMaps and non-linear workflow...all ancient history.
Had to rebuild the scene because of the Mantissa bread model I used. That single big bread model was 20 million polys alone. It, plus the complete scene I previously built with Dream Home, was simply too much for my 4 GB machine. Here's a render with a much smaller scene, but this time there are practically no surfaces with opacity maps.
Renders with occlusion samples at 512, ray trace depth at 12, shadow samples at 64, and pixel samples at 6x6 (due to the DOF) actually came out much faster than with the older scene. I think most of the renders I did with this one only took about half an hour (with UE2 AO mode).
Beautiful! And I think it could be even faster when the specular bounces are separate from diffuse bounces, and when only the refractive surfaces use the 12 limit.
Not possible since I'm not using UE2 indirect light or bounce GI. All that is plain 'old school' lighting.
And I do want that many specular bounces for glass and metal. The others are lower, of course.
Used about three linear lights - one diffuse/specular, one specular only and one diffuse only. Technically, it is possible to ditch all of them and go bounceGI mode (maybe with just two bounces), but that means your script needs to be available for all users first. From experience, depending on max distance, IDL/bounceGI render will be slower. With that last indoor scene I used for testing UE2 and DS 4.9, the render time was around 30 to 35 minutes. If I remember correctly, pure AO was around 10 minutes.
I'll run some tests to see what the render time is with IDL and maybe bounceGI.
Edit:
Pure AO plus direct light was around 21 minutes. IDL with direct light at max distance of 30 is around 30 minutes. So roughly speaking, IDL takes about 50% more render time or more. IDL with max distance of 100 was 45 minutes.
So you made sure to cap the diffuse depth via the US2 controls? It's been a while since I tested UE2 behaviour in 2+ raytrace-depth situations, but when I did, it slowed down even in AO modes when the max depth (and consequently max diffuse depth) was over 1.
It just seems to me that your machine should render somewhat faster even with 512 AO samples.
As I understand it, indirect diffuse max ray trace depth doesn't do anything in UE2 AO mode. It may help with UE2 IDL or bounceGI setting (without your script), but I really don't want to run my PC a whole day like mjc1016 did to test that out.
Technically, yes it should run faster with occlusion samples at 512, but I am using specular ray trace depth up to 12 and most of my materials (outside of fabric and grass/leaves) have reflection enabled. Yes, also on the mostly diffuse bread and donuts.
Granted they're capped at 1. Only glass and metals are set to 2 (max ray trace depth of the renderer). And all those reflections have some blur on them (at 8 samples). I find enabling reflections on everything allows you to pick up indirect specular nicely.
Tested out some more IDL combinations. Max distance was set to 60 for both. The difference is the ratio between the UE2 intensity scale and IDL strength. Both rendered in roughly 40 minutes. For the first, IDL strength was 50%, while the second one was at 100%. Mostly did it to see how much you should subtract from the UE2 main intensity scale to keep illumination levels generally similar. From the looks of it, around 18% (in gamma space) per 50% IDL strength.
Btw, here's a good video showing the material-building workflow. Granted, it is with RIS.
Good example really - a matte dielectric, a smooth one and a painted metal (or metallic paint).
Ah, so there's a lot of subtle blurred reflection, then the render time makes sense =)
And what about the mysterious third attachment? =)
Speaking of physics... I found a useful recap of the most important ideas in Naty Hoffman's SIGGRAPH 2013 "Physics and Math of Shading" paper: http://blog.selfshadow.com/publications/s2013-shading-course/
So for those who haven't yet had time to finish the video lectures, this could be helpful ;D
That's the pure AO render, for comparison's sake.
Aha, I see now; and the distance for AO is 60 as well?
I think it was almost twice the value I used for IDL.
Yes. 110
Woot!!
Yes. 150 GB of textures.
Um...yeah...3DL is so slooooooooow....
Looks like an interesting bit of software...
http://imageengine.github.io/gaffer/index.html
Reading some of the RIS docs. This tidbit is interesting and in line with my opinion about opacity masks (and Kettu's opacity tests).
Presence
From: https://renderman.pixar.com/resources/current/RenderMan/PxrLMDiffuse.html
And I do like how they implemented front/back shading. So you can have different texture/color on each side.
I was thinking about possible reasons for that, and I guess it may be connected with more efficient culling when there are only "on"/"off" regions. Gradient transparencies are accumulated until the total opacity reaches a predefined threshold, and with 0-1 ones, in many situations stuff behind the fully opaque portion can be discarded.
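That culling idea can be sketched in a few lines of Python; the threshold value and the 'over' accumulation here are illustrative stand-ins for whatever the renderer actually does internally.

```python
# Sketch: march through the surfaces a ray pierces, accumulating opacity
# until a threshold is reached. With binary (0/1) masks the walk stops at
# the first opaque hit; gradient maps keep every layer alive.
OPACITY_THRESHOLD = 0.996  # hypothetical cut-off, not a real renderer value

def surfaces_shaded(opacities, threshold=OPACITY_THRESHOLD):
    """Count how many surfaces along the ray must actually be shaded."""
    total = 0.0
    count = 0
    for o in opacities:
        count += 1
        total += (1.0 - total) * o  # standard 'over' accumulation
        if total >= threshold:
            break  # everything behind this point can be discarded
    return count

# Binary mask: the first fully opaque leaf ends the walk immediately.
print(surfaces_shaded([1.0, 1.0, 1.0, 1.0]))  # -> 1
# Gradient 'almost opaque' leaves: several layers still get shaded.
print(surfaces_shaded([0.9, 0.9, 0.9, 0.9]))  # -> 3
```

Which matches the observation: stacks of 0/1-masked leaves stay cheap, while soft-edged gradient masks make everything behind them pile up.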
Front/back shading is nice to work with when you have a working layering system... with our "static" shader interfaces in DS, the channel list gets too long and unwieldy when there are two options of each, so I decided not to bother with that.
Interesting
https://eheitzresearch.wordpress.com/240-2/
Neat!
...that reminds me that I haven't tested single scattering in 3DL yet.
Yeah, that combined with the method for rendering glint would be awesome.
Glint as in that paper with a render of beautiful red shoes? I think it should wait for OSL =) Should be better sampling there = less firefly potential.
And more potential to be 'cross renderer'...
This one: http://www.eecs.berkeley.edu/~lingqi/publications/paper_glints.pdf
Noise can be handled with a 'smart' filter. http://cvc.ucsb.edu/graphics/Papers/SIGGRAPH2015_LBF/PaperData/SIGGRAPH15_LBF.pdf
The times for the ground truth render are crazy high :D - 6 hours to 5 days?