Comments
Yes. That's pretty much the basic workflow.
Actually, they can get pretty close. The values you want are at the bottom of the page there. Some metals, like copper, do have a slight diffuse component, which is why the actual reflectance at 0 degrees is slightly higher than the value near the Brewster angle. At F0, copper is 0.83204. At the Brewster angle, it's 0.71533. That means the diffuse strength is the difference between the two - about 0.12. It is a very smooth diffuse if you're using diffuse with roughness, be it Oren-Nayar or the clay model used by UberSurface/UberSurface2.
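A quick sketch of that subtraction, using the copper values quoted above (the split into "reflectance near the Brewster angle" plus "diffuse" is the approximation described in this post, not a hard physical rule):

# Estimate a diffuse strength for a metal preset from the two
# reflectance values quoted above.
f0 = 0.83204          # reflectance at 0 degrees (head-on)
r_brewster = 0.71533  # reflectance near the Brewster angle
diffuse_strength = f0 - r_brewster
print(round(diffuse_strength, 5))  # 0.11671, i.e. roughly 0.12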
And like mjc1016 wrote, it's not 3delight itself. It's US and US2. 3delight can do more accurate Fresnel with simple and complex IOR. Just look at the 3delight for Maya metal shader.
Specular highlights are actually very blurry reflections. So as you decrease roughness (making the surface smoother), the specular gradually becomes a reflection. Here's an explanation from Neil Blevins - http://www.neilblevins.com/cg_education/reflection_highlight/reflection_highlight.htm
If you read more about Fresnel, IOR and the more technical stuff: the angle of reflection is the same as the angle of incidence, and the Fresnel equations use that angle together with the IOR - https://en.wikipedia.org/wiki/Fresnel_equations
So, although it is called the index of refraction (IOR), it governs reflection as well.
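To make that concrete, here is a minimal sketch of the unpolarized Fresnel reflectance for a simple (real) IOR - the dielectric case only, since metals need a complex IOR as mentioned above. The function name and defaults are just for illustration:

import math

def fresnel_reflectance(ior, theta_i_deg, n1=1.0):
    """Unpolarized Fresnel reflectance for light hitting a dielectric
    of real IOR `ior`, coming from a medium of IOR `n1` (air by default)."""
    theta_i = math.radians(theta_i_deg)
    sin_t = n1 / ior * math.sin(theta_i)      # Snell's law
    if sin_t >= 1.0:
        return 1.0                            # total internal reflection
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = ((n1 * cos_i - ior * cos_t) / (n1 * cos_i + ior * cos_t)) ** 2
    r_p = ((n1 * cos_t - ior * cos_i) / (n1 * cos_t + ior * cos_i)) ** 2
    return 0.5 * (r_s + r_p)

print(fresnel_reflectance(1.5, 0))   # ~0.04 at F0 for glass
print(fresnel_reflectance(1.5, 85))  # rises sharply toward grazing angles

Only the IOR and the angle of incidence (which equals the angle of reflection) go in, which is why the same numbers drive reflection strength.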
Ahh, ok, so it's 'ok' to use some diffuse. That sews up a problem I kept running into, where these sorts of metals end up looking almost pure black which... doesn't look right.
So, basically, diffuse/specular is a high level approximation of microsurface detail and reflection. I think I can wrap my head around that.
Technically, refraction doesn't exist...
It's actually a combination of reflectance and transmission. So, Fresnel's equations actually DO apply to everything.
Technically, nothing is solid, man!
Belgium is a made-up country!
(This thread is really blowing my mind. hee)
It's pretty simple, really, the math only deals with reflected or transmitted light, everything else is just description of the results....transmitted light that is deflected...refraction. Opaque...more light is reflected; transparent...more light is transmitted; translucent...somewhere in between. So, everything can be described in terms of Fresnel's equations...
For the curious, drawing on the lessons here...
Stonemason's Prototype. The original version is on the right, the modified version, relying more on reflection, on the left. Now, the right isn't 'worse' -- the armor looks very rough, like painted brushed metal. The internal parts, though, look a bit plasticky.
I think the best version would probably use the original plates but reflection-heavy everything else.
Painted or oxidized metal tends to have more dielectric properties than pure smooth metal, since physically the metal is covered by a bonding agent and a coat. For testing, I usually use objects that you typically see in real life. It's easier to match how it looks that way. Stonemason also did some renders with that model on some promos (with Iray). So you could use that as a reference as well. Though he used a red color, if I remember correctly.
Grab my Metal presets I put up on ShareCG recently, Will, they are more or less 'brushed' metals...just lower the 'reflection blur' to make them 'shinier'.
Physics is inescapable, so it may be worth brushing up on what your high school teacher used to talk about... There's a playlist of video lectures by some dude, quite useful IMO:
That may be a tad much for some, tho it is a good vid. About as good as "The Mechanical Universe" episode number 40. "Optics".
I remember one stumbling block I came across a few decades ago: while I understood what the 'formula' was doing with the numbers (this times that, to the power of this), I had no idea what the letters and squiggly symbols represented (or what they're called), lol. To put a small twist on what Prof. Goodstein said in an earlier episode, "here is an incredibly profound formula".
Velocity equals italic eff times Lambda.
See, you're not impressed. Because I have not told you what "italic eff" and "Lambda" are, and it doesn't mean anything to you. The velocity of a wave formula is the same: unless we know what the frequency (italic eff) and wavelength (Lambda) are, the formula doesn't mean anything to us, lol.
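Spelled out with its symbols defined, the way those old papers did it, it's just

v = f · λ

where v is the speed of the wave, f is the frequency in hertz, and λ (lambda) is the wavelength.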
Even then, some formulas can be rather complex and difficult to visualize how they apply to the way waves behave and interact.
"The scientific papers of James Clerk Maxwell" is a very good book. It is not just a flood of complex formulas leaving you completely dumbfounded as to what the letters are and completely lost as to what the formula is doing. Back then, it was common practice to list out what each letter represented in the formula, and even include graphical diagrams to show what there doing.
P.S. Prof. Goodstein, yes "Dr. David Goodstein". In some cultures, Professor (Prof.) replaces Dr. before the surname.
Thanks for reminding me why I decided NOT to continue as a math major...
and concentrate on doing live and recorded audio instead.
And why I gave up looking into a lot of stuff. I just didn't want to spend the time sifting through hundreds of unrelated definitions of a word in a paper for the one explanation of what something is in that paper. In fairness, sometimes manuals for recording gear can be just as bad, giving you little understanding of how to use an item, lol.
Kettu. I'm listening to DrPhysics speak, and I keep expecting him to jump to seemingly unrelated tangents, lol. His voice reminds me of a BBC host from years past.
JLB - I’d like you to meet a few people who were in or near New York City on a November evening over a decade ago. And the reason I’d like you to meet them is because they all have one thing in common: they were all brought to a sudden, catastrophic realization of how vulnerable they were. How dependent on one aspect of that technological network I was talking about. Because of what -- this -- did to their lives.
That was a really good series. "Connections", James Burke.
Yes, that's it. That seemingly little piece of technology. A small hint of how dependent we all are on the trappings of technology, and how many who depend on it don't understand how it works. Oh, and what is that device James is holding? It's a circuit breaker (control relay) from a power station at Niagara Falls, 'Adam Beck 2', from 1965.
Now I think I know why Studio doesn't have OpenVDB...it has got to be the most annoyingly frustrating piece of fornicating fecal matter ever packaged when it comes time to actually build/compile it...who in their right mind, in this day and age, with something so complex, uses a non-configurable build system (a makefile with NO configure, other than manually editing it)?
After an annoying amount of time...tracking down dependencies of dependencies that you only find out about because the build failed...AGAIN...I have a full working OpenVDB installed...can't do much with it at the moment, but it works. Yeah, I grabbed a couple of the samples...played around with them with the tools from the package, made a couple of quick renders with the built in renderer, etc...
On a side note...the standalone 3DL has been updated...
Hmm, after looking at the OSL notes, I'm thinking using 256 samples IS on the LOW side.
This is with Occlusion samples at 256 (plus shadow samples at 64, reflection blur samples at 32).
Richard Feynman had some great "introductory" lectures-condensed-into-a-book, too, but I don't remember if it went beyond geometric optics unfortunately...
I don't know about the rest of the world, but here in Russia your article won't get published if you don't provide a breakdown of your formula designations. And plots/diagrams are "the thing that your reader actually cares for" (tm). Coursebooks, same.
Isn't acoustics a damn annoying subject to study? Or clients never ask for a scientific breakdown of all the why's for mic/speaker setup? =)
If it works the way I think it does in the OSLtracer, then definitely on the low one. Have you tried it in Maya?
The render is great... but the bread is the giveaway. Here's where OpenVDB could've helped, for that spongy texture. Oh BTW, what about pixel samples? There are jaggies between the tiles, which could be "built into" the map or could come from undersampling and/or the filter.
I've met very few musicians who would actually be able to ask...let alone understand...they just don't think that way. And most of the others...'that's why you have the job...don't bother me with the details.'
And when you get down to the physics...it's pretty much the same, but with much smaller numbers...20 to 22000 Hz is a lot smaller than GHz and higher. But physics is physics...
I've been doing 3D for years, and yet my hand-drawn stuff features amazingly bad perspective. I figure it's a related deficiency.
30 yrs ago...much more was done by hand. Yeah, it is tricky...because as the frequency goes down and amplitude of the wave goes up...so does the physical size of anything 'working' with it...
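For a sense of the physical scale involved (assuming sound in air at roughly 343 m/s; the exact figure depends on temperature):

# Wavelength = speed / frequency
speed_of_sound = 343.0  # m/s, roughly room-temperature air
for freq in (20.0, 1000.0, 20000.0):
    print(freq, "Hz ->", round(speed_of_sound / freq, 3), "m")
# 20 Hz is about 17 m long, 20 kHz about 1.7 cm - hence the size of the
# gear (and rooms) needed to 'work' with the low end.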
Now here's an interesting thought for you...think of the pipes of an organ, the strings of a piano and the reeds of a clarinet as shader networks...or waveguides.
A tricky thought. I'm a bit too indoctrinated about harmonic oscillators.
What's a lot like shader networks is synth programming, where you string oscillators together and put them through certain filters to match the sound you have in your head.
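A toy illustration of that chain, just one sine oscillator fed into a one-pole low-pass filter - nothing specific to any real synth, and the constants are arbitrary:

import math

SAMPLE_RATE = 44100

def sine_osc(freq, n_samples):
    # A bare sine oscillator.
    return [math.sin(2 * math.pi * freq * n / SAMPLE_RATE) for n in range(n_samples)]

def one_pole_lowpass(samples, cutoff):
    # A simple one-pole low-pass filter - the 'certain filter' in the chain.
    alpha = 1.0 - math.exp(-2 * math.pi * cutoff / SAMPLE_RATE)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# String them together, much like wiring bricks in a shader network.
signal = one_pole_lowpass(sine_osc(440.0, SAMPLE_RATE), cutoff=200.0)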
It's a very low poly model with a low resolution texture.
There's this of course - http://www.themantissa.net/blog/bread-pack-01
But that's just a test shot. Here's another one.
The patio door glass is especially cool, and that little cupboard/chest/no idea what the right word is =)
How many specular bounces with all those refractive surfaces?
And thanks for the tasty link =)
The right synth/soft synth is EXACTLY like ShaderMixer...(or is it 'wrong').
But all of that is just the digital conversion of the analog/physical 'network'. Hard to think of a guitar as a 'shader', but reduce it to math and formulas...and well, it is.
The prop is from the LXV Furniture pack; it's the chiffonier. http://www.daz3d.com/l-xv-furniture-pack2
Looks cool, but the texture is sadly kinda low in resolution. Really old one so not surprising.
As per usual, I've set my maximum ray trace depth to 12. All the surfaces, outside of fabric and leaves, have reflections enabled, though only the metal and glass have their ray trace depth surface properties set to 2. So technically, only rays bouncing off those surfaces will go that deep. With the DOF, I think render time was quite high - around an hour and a half.
oh, RIS 21 is almost out. Really tempting too. Instead of one license, you get five now (at the same price).
By the way, Kettu, there's an idea I've been wondering about these past few weeks. I've seen DS shaders that are world-space aware (e.g. http://www.daz3d.com/seaside-shore-shaders etc). Is this DAZ specific or is it general RSL? If it is general RSL, I wonder whether it can be used to 'automate' some things. For instance, adjusting the SSS shading rate of surfaces/objects depending on how far away they are from the camera. Along the same line of thinking, I wonder if it's also possible to tie the shadow samples of lights to their softness.
Then there is the question of how much gain this will really give. A shader does not know anything about geometry apart from the shading point P, and the renderer does the SIMD execution. If the object gets very different shading rate values for its different parts because one is close to the camera and another is far... won't that downgrade performance?
Again, I simply don't know. Besides... There are times when "oldschool" SSS is useful (especially with bad geometry), but RT SSS is free from shading rate problems... it has its own sometimes, of course.
You know, I think a similar optimisation could be performed with existing shaders but DS scripting: there is a script by mCasual that creates a null at the object point closest to the camera, and another script by a different person sets camera focal distance using that null. A script could be written that would adjust SSS shading rate instead. It could enumerate over the surfaces using SSS-enabled shaders. With a bit more scripting, it could even run automatically before each animation frame is sent to the renderer...
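The mapping itself could be as simple as something like this - a sketch of the logic only, written in Python rather than DAZ Script, with made-up numbers and a made-up linear falloff:

def sss_shading_rate(distance_to_camera, base_rate=8.0, reference_distance=200.0,
                     min_rate=1.0, max_rate=64.0):
    # Coarsen the SSS shading rate for objects far from the camera.
    # All numbers here are placeholders, not recommended values.
    rate = base_rate * (distance_to_camera / reference_distance)
    return max(min_rate, min(max_rate, rate))

# A figure 100 units away gets a finer rate than one 1000 units away:
print(sss_shading_rate(100.0))   # 4.0
print(sss_shading_rate(1000.0))  # 40.0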
As for lights, you mean make the light adjust its own samples depending on how far its shadow falls? This would require passing messages between lights and surfaces back and forth, so specialised shaders would be needed for both. I'd say it's just easier to use light shaders that have their own sample setting each, like dzLights or Uber or whatever, just not the DS default ones.
The problem with special shaders is that you have to do EVERYTHING all over. There are shortcuts that DS has taken with the default shaders and lights that aren't really documented and aren't available in Builder or Mixer (basically they are part of the default bricks and therefore black boxes), so you have to code in just about every contingency...and reinvent the wheel. Not that it isn't doable...it's just a lot of work for very little, if any, gain.
Yeah, like that "accept shadows" thing.
Old ShareCG freebies by Sixus1 (HER and her stuff) and 3-d-c (the SF corridor/room series). There are four polys representing 4 light sources in the room model; they were assigned to the same surface but I separated them with the geometry editor and used a different light colour in each. The backlight comes from an invisible plane ("phantom" on).
So, five path-traced area lights all in all, plus a GI envlight. The EXR took around 23 minutes to render on my laptop; 12 pixel samples - DOF is quite smooth this way IMO. All the shaders are mine except her glowing eyes, which are UberSurface set to ambient. You can see the ambience bouncing around her eye sockets nicely.
EDIT: replaced JPEG with PNG, I forgot the forum software mangles jpegs this bad.
A lot of noise on the skin.
I was thinking something more simple - adjust shadow samples depending on the softness value set in the parameters. Haven't really thought about adaptive sampling between near and far contact points.
Areas in shadow will always be more noisy, especially in tests when using only 64 GI samples and 128 SSS ones. Oh and BTW, if you're looking on a phone/tablet, chances are that there are extra dither-like artefacts introduced into PNG images.
For that, if it's just a static per-light param, I think it's more efficient to use DS scripting, too (and lights that understand individual shadow samples, of course). So that instead of the light shader doing more calculations at render time, a script would iterate over lights in the scene before it's sent to the renderer, and adjust samples.
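Purely as an illustration of the kind of per-light rule such a script could apply (made-up constants, and not the actual DS scripting API - just the mapping from softness to samples):

def shadow_samples(softness, base_samples=16, samples_per_softness=0.5,
                   max_samples=128):
    # Softer shadows get more samples so the penumbra stays clean.
    samples = base_samples + softness * samples_per_softness
    return int(min(max_samples, max(base_samples, samples)))

print(shadow_samples(0.0))    # 16 - crisp shadow, few samples needed
print(shadow_samples(100.0))  # 66 - soft shadow, more samples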
Now, the "On/Off" opacity maps vs hi-colour ones:
The theory about 1bit ("On/Off") opacity maps rendering faster seems to be true.
I set up a scene (see attached render) with a spotlight pointing straight down (raytraced soft shadows; the intensity is low so it's hard to notice, but the calculations are there) and an UE2 with a trace distance of 100 units. The top plane is 80 units from the bottom one, so they are all raytraced by the ambient occlusion. The four planes in the air are using the same opacity map: either a 1-bit one evenly filled with black and white pixels or an RGB one filled with 128 grey (hi-colour, not greyscale). Both maps are originally TIFF. If anyone wants to repeat the test, I will put them up for download.
I ran six renders of the 1bit map and six renders of the 128RGB map, first for the DS default shader and then for UberSurface (the free one). The DS Default shader is slightly faster to render in both cases. As is the 1bit opacity map.
The difference is in the tenths of a second for this scene, but overall it's about 2% faster for the "On/Off" map (see the quick check after the numbers below). Hard to predict whether it will scale exactly like this, but still.
DS Default shader
On/Off average for six runs: 9.0183 s
RGB average for six runs: 9.1967 s
Average gain, %: 1.96
UberSurface
On/Off average for six runs: 9.2883 s
RGB average for six runs: 9.4833 s
Average gain, %: 2.08
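For reference, the percentages fall out of the averaged timings like this - they match 1.96 and 2.08 if the gain is measured against the mean of the two averages, which is my guess at how it was computed:

timings = {
    "DS Default":  (9.0183, 9.1967),  # (On/Off avg, RGB avg), seconds
    "UberSurface": (9.2883, 9.4833),
}
for shader, (onoff, rgb) in timings.items():
    gain = (rgb - onoff) / ((rgb + onoff) / 2) * 100  # % relative to the mean
    print(shader, round(gain, 2))  # DS Default 1.96, UberSurface 2.08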
PS The tests were done in REYES because that's what the majority of the remaining DS/3DL users seem to use.