Show Us Your Iray Renders. Part III
Comments
And Iray produces stunning realism!...
I let this one bake up to 50%...could have let it go more...but I felt it was not really needed...most of the noise was gone anyway...and she looks fantastic already!...
No postwork was applied...just straight out of the box.
Best viewed at full resolution.
Yay, it worked! I tried it out with some quick alpha-brush mech bolts using GoZ to ZBrush and back. (Sorry G2F, I know it ain't pretty.) This is by far the best success I've had with displacement maps. I usually have issues with seams that are especially apparent when the figure is backlit like this. But it worked like a charm! I even put a giant bolt in the middle of the shoulder UV seam, and it matched up great. The figure is at sub-d 2 and displacement sub-d is at 3, working with no problems. This opens up a lot of possibilities for me, thank you for the great help! I owe you a Jack and Coke.
(edit - I cropped the photo because the bolts at the shoulder were just so ugly looking. A good test of displacement with strangely horrifying results)
Sorry to bring up an old post, but I've just gotten around to trying this, and I'm confused. When I try to GoZ the sculpt back to G2F in DS, it offers to make a morph, but if I uncheck update geometry and instead choose update materials, it makes the figure blank white and does not import my displacement at all. Is there a setup step between "sculpt in ZBrush" and "GoZ" that I'm missing?
At the end of the day.
Technical problem. I told y'all I was working on an image called LADY OF THE LAKE, which included wet fabric, wet skin, and lake water. The artist I was working with loved the image, but wanted me to make a very big one we could print on canvas. Very big as in 4200 by 6300 pixels, at 350 DPI. Now, I did this twice. First I let it render overnight, but it was still really grainy, so I tried again: let it render 12 hours, or 43,200 seconds, quality of 20, 15,000 samples. It's STILL grainy as hell. Can anybody suggest anything? I thought about rendering it smaller and then blowing it up with Alien Skin Blow Up, which I have done before, though never with a project so large. Yet even then, I'd probably end up with graininess. You can see the problem. The upper part of her face is relatively clear, but the shadowed part, which probably has the reflections from the shine of the wet skin, is problematic. My partner might be able to put it through a noise-removal filter, but I'm afraid that will kill too many details.
When I need the best quality render I can get, I set Max Time to 0 (it used to be -1, but DAZ changed that, thankfully) and set Image Convergence to 100%. I let one render, 4000 x 5127 pixels, go on for almost 4 days, CPU only, to get the quality I needed. It may take a long time, but if you have a lot of dark areas, that extra 5% can make a big difference. I've never set Quality above 1 for anything.
The settings mentioned by ACross are your best option to get the job done. However, they don't solve your main problem, which is the large size of the final output. I doubt that your book cover will be 12 x 18 inches, so I think this size is for promotional material, probably a poster. Your artist partner is using a rule from the printing industry which says that the image resolution should be double the printing halftone screen line frequency, meaning that if you want to print an image at 177 lpi, your image should have something like 354 dpi or 354 ppi. LPI (lines per inch) is an important measurement related to the way printers reproduce photographic images. The LPI depends on the output device and the type of paper. On the Web, LPI is not a factor because images display on-screen in pixels (PPI).
The typical LPI for offset printing ranges from 85 to 133 lines per inch. The figures are much lower for screen printing and laser printing. High-quality offset or gravure printing, such as for glossy color magazines, may go as high as 300 LPI.
In calculating the required resolution for an image, LPI (based on type of paper and printing method) x 2 is the most commonly used formula (i.e. 133 LPI x 2 = 266 required PPI); see the sketch below.
If your computer does not have the necessary power to output an image at this size in a reasonable timeframe, you can output your image at something like 3192 x 4788 pixels, which is 1.5x the print halftone frequency. Someone may tell you that you will sacrifice final printing quality, but I can assure you it is not visible to the spectator, as a poster is made to be viewed from a certain distance. With this resolution you will reduce the render time and still get a full-resolution image for the cover and a very decent resolution for the poster. I hope this helps you.
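To make that arithmetic concrete, here is a minimal Python sketch of the rule of thumb described above (the function name and the 12 x 18 example are just for illustration):

```python
def required_pixels(width_in, height_in, lpi, factor=2.0):
    """Pixel dimensions needed for a print of the given size in inches.

    factor=2.0 is the standard LPI-to-PPI rule of thumb;
    factor=1.5 is usually enough for posters viewed from a distance.
    """
    ppi = lpi * factor
    return round(width_in * ppi), round(height_in * ppi)

print(required_pixels(12, 18, 177))       # (4248, 6372) with the full 2x rule
print(required_pixels(12, 18, 177, 1.5))  # (3186, 4779) with the 1.5x shortcut
```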
Nice artwork, Damsel......I'm rendering a scene now and it's been running 58 minutes and 56 seconds.....and oh, it's 0.66% done.....lots of reflective chrome and silk and other difficult materials in the scene. The graininess in your shot is partly due to the darkness.
Trick 1: Crank up the light and edit it back down in Photoshop.
Trick 2: Go for a lower resolution. In my experience a well-rendered CGI has twice the linear resolution of a digital camera of the same resolution (no Bayer array in place in the CGI world), so up-resing is no problem at all......
Trick 3: Use NeatImage to get rid of some noise (see the sketch after this post).....it's something I use a lot these days to denoise some of my older digital photos (sensors have improved a lot over the years, and retaking shots is, well, difficult at best).....I've also used it on some grainy renders from Cycles (no NVidia card, so 500 cycles at 4000 x 3000 is all I can afford......a 5-hour wait is enough in my book)......
Trick 4: Buy hardware......multiple NVidia cards (no SLI) will speed up the render considerably......
Trick 5: Download Blender and use Cycles instead of Iray; both work better on Nvidia CUDA cores than on Intel processors.....but Cycles is a lot more efficient on the processor alone....however, CUDA rules....completely!
Greets, Ed.......who's at 1008 iterations and 0.68% done! And 1 hour and 9 minutes are gone.....I love an i5 :-)!
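On Trick 3: NeatImage builds a statistical noise profile of the image, which a simple filter cannot match, but even a basic median filter will knock out isolated fireflies. A minimal sketch using Pillow (the file names are placeholders):

```python
from PIL import Image, ImageFilter

# A 3x3 median filter replaces each pixel with the median of its
# neighborhood, which removes isolated bright "firefly" pixels while
# preserving edges better than a plain blur would.
render = Image.open("grainy_render.png")
render.filter(ImageFilter.MedianFilter(size=3)).save("denoised_render.png")
```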
WIP of a reptilian character from my novel. Still messing around with the gloss settings to get it looking right, but I like how it's coming along. Iray rocks!!
Because you chose not to import your morph. The morph is an important part of the displacement, and the displacement settings are imported with the morph in general, not with the base.
Playing with animation
3 frames per minute with the GTX 760 .. 1560 frames total, so about 520 minutes (roughly 8.7 hours) of rendering
check out the IRAYMAN lol
https://plus.google.com/+Mec4D/posts/1pTRYAenSfE
Did a render for LlolaLane's Render a Month Challenge, so for a very quick and dirty test of Iray I re-did it with minimal changes (just swapped out the cigarette for the one from the Fatale Noir set, plus slight pose tweaks to account for it). The smoke remains a prop from Smay's Fire and Smoke set.
Iray changes include the lights, setting the ash part of the fag to be an emitter, applying Iray Uber to the smoke and tweaking it, and changing the tear to Iray Uber thin water.
No offense intended, but much of this is in your head. I'm not quite sure how to state this. I believe I've tried to explain this to you in the past, unless I am getting you confused with someone else.
I think it unwise to try to draw comparisons between unbiased rendering engines unless the lighting and materials are set up the exact same way for both renders. Same environmental setting, same lighting conditions, same camera placement and viewing angles, same camera type, same exposure and other settings all need to be exact, or the comparison is pointless. For example, I'd assume the reason the eyes look different in Octane than in Iray is that in the Iray render the "sun" is in front of the model, so it catches the eyes in highlight. The Octane render by comparison seems to have the sun behind his head, so his eyes remain in shadow.
If you are looking for any sort of render engine difference, I doubt you will ever find one that is in any way meaningful. Iray is equal to Octane because they are both unbiased, so there is a very limited range of divergence allowed between them. They simply don't have the option to diverge too greatly. If you provide each of them with the exact same parameters to work with, they MUST produce results that are identical, or you can take them to court and litigate (see the toy sketch after this post). The key here is providing EXACTLY the same inputs, and that means knowing how to translate the terms in Iray to the terms in Octane, plus the painstaking task of manually ensuring everything is perfectly aligned, which is an incredible amount of work. Some of the terms are the same and the values are similar; in other cases not so much, and that is why you see so few legit render comparisons.
For example, there was at one time a lot of talk about how LuxRender was somehow more "accurate" than Octane, as was often touted by Pauolo and some others. The argument was that Octane was faster because it was less accurate, and that LuxRender was worth the extra time for the added quality. But the truth is that, to a major extent, all of the unbiased engines are the same and produce the exact same results if you can find some way to properly match the inputs.
So I'd say that time spent comparing engines is essentially time wasted, unless you set out from the start to compare them and build the scene toward that purpose from the beginning. At this point of unbiased saturation, we are better off focusing on how to work with unbiased engines in a general sense than looking for differences in output. Because again, any differences we discover will almost always come down to an issue with the user, not with the underlying math of the applications, because it is all the same or it wouldn't be unbiased.
Both renders are nice and realistic to equal degrees, in my personal view. I actually prefer the Octane one for the environment, but I love Iray as well, so for me it is a win-win.
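A toy sketch of that claim (my own illustration, nothing to do with either engine's code): two unbiased Monte Carlo estimators of the same integral sample differently, so they are noisy in different ways along the way, but both must converge to the same answer, just as two unbiased renderers given identical inputs must.

```python
import random

# Both estimators compute the integral of f(x) = x^2 over [0, 1],
# which is exactly 1/3. They sample differently (different noise),
# but both are unbiased, so with enough samples they must agree.

def uniform_estimate(n):
    # Sample x uniformly on [0, 1] and average f(x) directly.
    return sum(random.random() ** 2 for _ in range(n)) / n

def importance_estimate(n):
    # Sample x with density p(x) = 2x (inverse transform: x = sqrt(u)),
    # then average f(x) / p(x) = x / 2. Lower variance, same mean.
    return sum(random.random() ** 0.5 / 2 for _ in range(n)) / n

print(uniform_estimate(1_000_000))     # ~0.3333, noisier per sample
print(importance_estimate(1_000_000))  # ~0.3333, smoother per sample
```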
Nice render, Simon, looks so much like the actress and film star Bette Davis. Maybe just a slight twist of the hand to see the cigarette detail better (only a thought) ;-)
Edited to note: just seen that your 'Render Challenge' picture has the cigarette in a more visible position. Looks even more like Bette Davis in this black-and-white image. Very nice.
Cheers :-)
Thanks for mentioning it. People do not realize that; the only differences they will see are in the scene and materials they set up, and that is it.
Most people render with Direct Light in Octane, which is the equivalent of Interactive mode in Iray, so that is another pitfall.
To do it properly, all materials need to be accurate PBR materials with the same values and no others; then you will truly see the results.
Cool
Looks like Grig from The Last Starfighter
This is why I think aiming for "perfect photorealism" is overrated. Most photographs we see and want to emulate are "Photoshopped" (they even managed it in the '30s). Photoreal doesn't mean it looks good either; there are plenty of photographs that look like complete crap. I know, I've taken a lot of them.
There was a thread on one of the Blender forums where someone made an image that looked completely photoreal; unfortunately, that "photo" was a crappy Polaroid with flash. It was an excellent technical experiment, but not a compelling image.
As far as I'm concerned, don't worry too much about whether something is perfectly photorealistic; just worry about whether it looks good.
@RawArt Thank you! Looks great and dead simple to use.
So true...Kamion99!...at the end of the day it just has to look convincing and good.
Yes and no. "Looks good" is a rather subjective argument and has more to do with the way the individual artist has trained his eye to expect certain things and not others. One thing about the "real deal" is that it is always surprising. I always "think" I know what a blade of grass looks like, but every time I observe one in real life I discover something new I'd never seen before.
To a great extent many of us suffer in PBR simulations because we have our "eyes" trained by bad habits gained from years and thousands of renders made with biased engines. We have to re-learn right from wrong so that our expectations are in line with reality.
For example, there was a poster a little while ago asking about eye reflections, wondering why they were not visible in the Iray render, expecting that the reflections were supposed to be mapped onto the cornea's UV map. Another poster explained that painted-on reflection maps were a legacy cheat often employed in 3Delight or Poser, but that such a thing would be very wrong in Iray: there needs to be a prominent light source in the scene for the eyes to reflect, just like in real life. So basically, the original user was expecting something due to habits gained from rendering experience, not from real-life experience. Because, as another poster also stated, in real life you rarely see distinct eye reflections, yet in renders we've convinced ourselves over the years that those reflections are paramount for realism, when in fact they are not.
I think it is okay to let the render engine teach you from time to time, just like you'd let a real photograph teach you. You'd never argue with the realism of a real photo, and it should be similar with Iray. Iray will do what it is told even if it doesn't look "right".
Realism and art are a tricky combo to begin with. I drive past fields of grass and trees every day, but rarely have I pulled over to really OBSERVE the plants in detail. Because I know they are real my unconscious mind assumes I've already seen plants like this before, so my eyes tend to look at other more interesting things that are "new."
In many of the most realistic renders, especially in Iray, Lux, and Octane, viewers tend to assume the image is entirely real and so they don't look very closely at it. Only once they are made aware it is fake do they begin to appreciate the attention to detail that made the image appear realistic in the first place. Realism, once it reaches a certain degree of accuracy, can in a sense be quite boring, because viewers will assume there is nothing there they haven't already seen in some way, so they lose interest quickly. Deliberate art, however, that doesn't conform to physical laws might produce more interesting images, because so much of it will be things the eye doesn't see in real life, and that "extra headroom" might even provide more opportunity for "commentary" by the individual artist.
All this to say that one needs to decide which they value more: that it "looks good" based on quite possibly misguided expectations, or accuracy, which has nothing to do with personal preference but with technical attention to detail. Realize that looking good is based on personal biases, while accuracy references universal physics. One leaves room for you to tweak it; the other does not.
Fun fun!!!!
...pretty nice.
ha! I can totally see that :lol:
Just another day for Fiery in the restaurant of the Hindenburg! Took about 2.5 hours to render while I was discussing 3D with some friends.......I must say it finished at the 5000-cycles max I had set......now, I'm using an i5 bread-and-butter computer with an AMD video card that is nice enough but does not really contain too many CUDA engines......in Blender Cycles that is not a big deal.....I can render 400 cycles deep and have a 4000 x 3000 pixel print ready in 5 hours (of the same scene).....and then I have about as many fireflies as this one (which is 1000 x 1000).......but the sheer quality of the materials in Iray is breathtaking......so: build in Blender, UV unwrap in Blender, test render in Blender.....then .obj and off to DAZ and Iray........will invest in the biggest bad-ass GTX my money can buy (and my money is too tight to mention)........but I'll squeeze out a 980, because the materials ooze, the silk is silky, the chrome shines like a '58 Corvette bumper polished with Aaron's beard, and the red velvet........damn, where's Marilyn!
Great stuff..................
Greets, Ed,
While not all photos may have reflections in the eye, I've yet to see a person in the real world who didn't. Perhaps not strong, but they're always there, so I would argue that a result with no reflection in the eye is incorrect even if everything seems physically correct.
The real issue is that while the raytracer is physically correct, the environment still isn't, unless you're modelling the whole chunk of the physical world around your scene, including the atmosphere (this is why people like HDRIs so much, as they can get fairly close). But we obviously can't do that, so we cheat a little instead.
Really nice one!
He looks as if he had lost his biker suit at full speed :)
Where the ... are my pants?!
That is because IRL we always have something that is lit enough to cause a reflection on our always-moist and shiny eyeballs....sometimes reflections are caused by lamps as well. For instance, when using two lightboxes in a butterfly configuration you can see 2 roundish stripes of light on the retina (er, iris of course, what were you thinking, Ed). When using a ringflash for a model shoot, the model looks like an early-2000s BMW.......Good old Rembrandt used slightly off-white paint to paint in reflections, almost as the last touch to his paintings (the very last touch was selling the thing, since he had to earn his chow somehow).
In rendering I sometimes place a rather large emitting rectangular shape behind the camera to mimic light flowing in from a window.......if you are clever you can even put it in front of the camera, making the object itself invisible and using only its light.
Eyes without reflections look dead (IMHO).......
Greets, Ed.
...this is why I created the scene of the girls at the bus stop. The scene purposely involved a number of difficult elements, including reflectivity, transparency, and subsurface scattering. Originally I set it up to compare the differences between pushing 3DL as far as I could and Reality 2.5/Lux. Well, in the midst of the test, Reality 4 was released, which supposedly supported SSS. Sadly, the initial release had some serious bugs, one of which was that if a scene was older (or processed in an earlier version of Reality), none of the surfaces would show in the materials tab. Now, I wasn't going to rebuild the scene from scratch, so I just made a copy of the 3DL version and had to reconvert all the materials again. Each time a new patch was issued I had to go through the same process all over again, as well as deal with the glacially long render times that made Bryce's render engine look like a speed demon. I eventually gave up out of frustration and uninstalled Reality 4 and Lux.
When the DAZ 4.8 beta was released in March, I decided to revisit the experiment and made a copy of the scene for Iray. After several tests I ran the final render, which I posted in the first incarnation of this thread (total render time just over three and a half hours, with maximum render time set to 4 hrs and convergence to 99%). Unfortunately, as I was never able to get a good clean render in Lux, even after upwards of 13 hours, I couldn't make a proper comparison of quality between the two. From a workflow standpoint, Iray won hands down, even though I, and many others, were pretty much "flying by the seat of the pants" with it.
As to comparing the Iray render to the 3DL one, it's a total apples-and-oranges situation. 3DL has some advantages, such as render time (especially in 4.8), being able to use a skydome with a distant light for the sun, using shaders like AoA's grass/rock ones, effects cameras, or hair-generation plugins like Garibaldi or LAMH. However, even with all the tweaks I made, it couldn't match the realistic quality of Iray.
Nothing special, just more messing around with Iray and shaders. I liked the results so I thought I would share it. Custom figure/textures using V6 HD and the Pixar HDRI.
Best if viewed at full resolution.
Still debating the look of a future character in my webcomic, Ambassador Aleph of the Forn Assembly.
The Forn Assembly consists primarily of mai (robots) who don't feel compelled to have humanoid bodies, so I want something that stands out as a little alien.
Previous attempt was a bit too much 'standard hard robot' like other mai. This is another attempt, but I'm not sure the somewhat humanoid elements work or I should go with something more abstract (maybe like the previous ball robot).
Thoughts?
(Also, I'd like to say that the supersuit and bot Genesis play very nicely together: the supersuit 'below the skin' ends up making this delightful surface in the gaps between bot pieces. With transparency, niftiness!)
How about human eyes?
Greets, Ed.
Use one of the Iray water shaders on the Eye Reflection and Cornea surfaces.
In answer to the Pixel Filter, because I was confused by this as well:
http://wiki.bk.tudelft.nl/toi-pedia/Rendering_Mental_Ray:_Anti_Aliasing_Settings
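In case the link goes stale: a pixel filter decides how much each sub-pixel sample contributes to the finished pixel, which is what smooths (anti-aliases) edges. A minimal sketch of the Gaussian-style weighting such settings control (my own illustration, not Iray's or mental ray's actual code):

```python
import math

def gaussian_filter_weight(dx, dy, radius=2.0, alpha=2.0):
    """Weight of a sample at offset (dx, dy) from the pixel center.

    Samples near the center dominate; anything beyond `radius`
    contributes nothing. A larger radius gives smoother edges at the
    cost of a slightly softer image.
    """
    d2 = dx * dx + dy * dy
    if d2 > radius * radius:
        return 0.0
    # Gaussian falloff, shifted so the weight reaches exactly 0 at the radius.
    return math.exp(-alpha * d2) - math.exp(-alpha * radius * radius)
```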
This is an experiment I tried just to see if it would work, and it turned out better than I expected.
The only light in the scene is coming from the stone lantern. This model has a lamp inside the enclosure; what I did was put the Iray shader on the lamp flame and turn it into a light emitter. It is emitting at a temperature of 2400K (a bit too hot for a real flame, but the scene gets too orange for my taste if I use a real candle-flame temperature).
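As a rough illustration of why lower temperatures read as more orange, here is a common blackbody-to-RGB approximation (Tanner Helland's curve fit; my own sketch, not what Iray does internally):

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate RGB colour of a blackbody emitter, valid from
    roughly 1000K to 40000K (Tanner Helland's curve fit)."""
    t = kelvin / 100.0
    red = 255.0 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    if t <= 66:
        green = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        green = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        blue = 255.0
    elif t <= 19:
        blue = 0.0
    else:
        blue = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(red), clamp(green), clamp(blue)

print(kelvin_to_rgb(1900))  # real candle flame: about (255, 132, 0), deep orange
print(kelvin_to_rgb(2400))  # the emitter above: about (255, 155, 61), less orange
```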
I see Iray is gathering more steam..
Everything turned out pretty much the way I wanted it in this render.. except for the Glow Stick..