Show Us Your Iray Renders. Part III


Comments

  • sheedee3D Posts: 214
    edited May 2015

    And Iray produces stunning realism!...

    I let this one bake up to 50%... I could have let it go further, but I felt it wasn't really needed. Most of the noise was gone anyway, and she looks fantastic already!

    No postwork was applied, just straight out of the box.

    Best viewed at full resolution.

    jhgvjhjhgv.jpg
    1500 x 1500 - 1M
    Post edited by sheedee3D on
  • SickleYield Posts: 7,644
    edited December 1969

    pearbear said:
    MEC4D said:
    I use GoZ to export to ZBrush for my work and stuff for JCMs and others; for my clothing I export and import manually.
    It is also better to export the morph from ZBrush and import it manually with the Cinema or Lightwave profile, so you have a copy in case DS crashes before you have saved it.

    The good thing is that after you have created, for example, a face displacement, you can import it with GoZ back into Studio with the displacement map values included, and that is important so you aren't guessing at the best result.

    Much luck with GoZ

    Yay, it worked! I tried it out with some quick alpha brush mech bolts using GOZ to Z-Brush and back. (sorry G2F, I know it ain't pretty) This is by far the best success I've had with displacement maps. I usually have issues with seams that are especially apparent when the figure is backlit like this. But it worked like a charm! I even put a giant bolt on the middle of the shoulder UV seam, and it matched up great. Figure is at sub-d 2 and displacement sub-d is at 3, working with no problems. This opens up a lot of possibilities for me, thank you for the great help! I owe you a Jack and Coke.

    (edit - I cropped the photo because the bolts at the shoulder were just so ugly looking. A good test of displacement with strangely horrifying results)

    Sorry to bring up an old post, but I've just gotten around to trying this, and I'm confused. When I try to GoZ the sculpt back to G2F in DS, it offers to make a morph, but if I uncheck update geometry and instead choose update materials, it makes the figure blank white and does not import my displacement at all. Is there a workup step between "sculpt in Zbrush" and "GoZ" that I'm missing?

  • GlennF Posts: 141
    edited December 1969

    At the end of the day.

    I_Built_That.jpg
    932 x 846 - 568K
  • Damsel Posts: 385
    edited December 1969

    Technical problem. I told y'all I was working on an image called LADY OF THE LAKE, which included wet fabric, wet skin, and lake water. The artist I was working with loved the image, but wanted me to make a very big one we could print on canvas. Very big as in 4200 by 6300, DPI of 350. Now, I did this twice. First I let it render overnight, but it was still really grainy, so I tried again--let it render 12 hours, or 43,200 seconds, quality of 20, 15,000 samples. It's STILL grainy as hell. Can anybody suggest anything? I thought about rendering it smaller and then blowing it up with Alien Skin blowup, which I have done before, though never with a project so large. Yet even then, I'd probably end up with graininess. You can see the problem. The upper part of her face is relatively clear, but the shadowed part, which probably has the reflections from the shine of the wet skin, is problematic. My partner might be able to put it through a noise removal filter, but I'm afraid that will kill too many details.

    closeup.jpg
    905 x 1000 - 281K
  • L'Adair Posts: 9,479
    edited December 1969

    Damsel said:
    Technical problem. I told y'all I was working on an image called LADY OF THE LAKE, which included wet fabric, wet skin, and lake water. The artist I was working with loved the image, but wanted me to make a very big one we could print on canvas. Very big as in 4200 by 6300, DPI of 350. Now, I did this twice. First I let it render overnight, but it was still really grainy, so I tried again--let it render 12 hours, or 43,200 seconds, quality of 20, 15,000 samples. It's STILL grainy as hell. Can anybody suggest anything? I thought about rendering it smaller and then blowing it up with Alien Skin blowup, which I have done before, though never with a project so large. Yet even then, I'd probably end up with graininess. You can see the problem. The upper part of her face is relatively clear, but the shadowed part, which probably has the reflections from the shine of the wet skin, is problematic. My partner might be able to put it through a noise removal filter, but I'm afraid that will kill too many details.

    When I need the best quality render I can get, I set Max Time to 0 (it used to be -1, but DAZ changed that, thankfully) and set Image Convergence to 100%. I let one render, 4000 x 5127 pixels, go on for almost 4 days, CPU only, to get the quality I needed. It may take a long time, but if you have a lot of dark areas, that extra 5% can make a big difference. I've never set Quality above 1 for anything.
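    To make the interplay of those settings concrete, here is a minimal sketch, in Python pseudocode rather than anything from DAZ Studio or Iray itself, of how the progressive render's stopping criteria behave: rendering stops at whichever enabled limit is reached first, so an unlimited Max Time leaves the convergence target (and the sample cap) as the only brakes. The function name and default values below are illustrative assumptions, not an actual API.

```python
# Illustrative sketch only -- not DAZ Studio or Iray code.
# Models how the progressive render limits interact: rendering stops as soon
# as ANY enabled limit is reached, so Max Time = 0 ("no limit") leaves the
# convergence target and the sample cap as the effective stopping conditions.

def should_stop(elapsed_seconds, samples_done, converged_fraction,
                max_time=0, max_samples=15000, target_convergence=1.00):
    """Return True when the progressive render should stop."""
    if max_time > 0 and elapsed_seconds >= max_time:
        return True                      # ran out of time
    if samples_done >= max_samples:
        return True                      # hit the iteration/sample cap
    if converged_fraction >= target_convergence:
        return True                      # enough pixels judged converged
    return False

# Example: 12 hours elapsed, 15,000 samples done, but only 92% converged --
# the sample cap ends the render while dark areas are still grainy.
print(should_stop(43_200, 15_000, 0.92))   # True
```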

  • MBusch Posts: 547
    edited December 1969

    ACross said:
    Damsel said:
    Technical problem. I told y'all I was working on an image called LADY OF THE LAKE, which included wet fabric, wet skin, and lake water. The artist I was working with loved the image, but wanted me to make a very big one we could print on canvas. Very big as in 4200 by 6300, DPI of 350. Now, I did this twice. First I let it render overnight, but it was still really grainy, so I tried again--let it render 12 hours, or 43,200 seconds, quality of 20, 15,000 samples. It's STILL grainy as hell. Can anybody suggest anything? I thought about rendering it smaller and then blowing it up with Alien Skin blowup, which I have done before, though never with a project so large. Yet even then, I'd probably end up with graininess. You can see the problem. The upper part of her face is relatively clear, but the shadowed part, which probably has the reflections from the shine of the wet skin, is problematic. My partner might be able to put it through a noise removal filter, but I'm afraid that will kill too many details.

    When I need the best quality render I can get, I set Max Time to 0 (it used to be -1, but DAZ changed that, thankfully) and set Image Convergence to 100%. I let one render, 4000 x 5127 pixels, go on for almost 4 days, CPU only, to get the quality I needed. It may take a long time, but if you have a lot of dark areas, that extra 5% can make a big difference. I've never set Quality above 1 for anything.

    The settings mentioned by ACross are your best option to get the job done. Still, they don't solve your main problem, which is the large size of the final output. I doubt that your book cover will be 12 x 18 inches, so I think this size is for promotional material, probably a poster. Your artist partner is using a rule from the printing industry which says that the image resolution should be double the printing halftone screen line frequency, meaning that if you want to print an image at 177 lpi, your image should have something like 354 dpi or 354 ppi. LPI (lines per inch) is an important measurement related to the way printers reproduce photographic images. The LPI depends on the output device and the type of paper. On the Web, LPI is not a factor because images display on-screen in pixels (PPI).

    The typical LPI for offset printing ranges from 85 to 133 lines per inch. The figures are much lower for screen printing and laser printing. High-quality offset or gravure printing, such as for glossy color magazines, may go as high as 300 LPI.

    In calculating the required resolution for an image, LPI (based on the type of paper and printing method) x 2 is the most commonly used formula (e.g. 133 LPI x 2 = 266 SPI required).

    If your computer does not have the power to output an image at this size in a reasonable timeframe, you can output your image at something like 3192 x 4788 pixels, which is 1.5 times the print halftone frequency. Someone may tell you that you will sacrifice the final printing quality, but I can assure you it is not visible to the viewer, as a poster is made to be viewed from a certain distance. With this resolution you will reduce the render time and still get a full-resolution image for the cover and a very decent resolution for the poster. I hope this helps.
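    The arithmetic behind that rule of thumb is easy to check. Below is a small Python sketch (nothing Iray- or DAZ-specific; the function name is just for illustration) that turns a print size and halftone line screen into the pixel dimensions to render, using either the usual 2x factor or the relaxed 1.5x factor suggested above.

```python
# Quick arithmetic behind the "LPI x quality factor" rule of thumb.
# Nothing Iray-specific -- it just converts a print size and line screen
# into the pixel dimensions needed for the render.

def required_pixels(width_in, height_in, lpi, quality_factor=2.0):
    """Pixels needed for a print of the given size at a given line screen."""
    ppi = round(lpi * quality_factor)        # required pixels per inch
    return width_in * ppi, height_in * ppi, ppi

# 12 x 18 inch print on a 177 lpi screen:
print(required_pixels(12, 18, 177, 2.0))   # (4248, 6372, 354) -- the strict 'LPI x 2' target
print(required_pixels(12, 18, 177, 1.5))   # (3192, 4788, 266) -- the relaxed 1.5x option
```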

  • ArtisanS Posts: 209
    edited December 1969

    Nice artwork Damsel... I'm rendering a scene now and it's been running for 58 minutes and 56 seconds... and oh, it's 0.66% done... lots of reflective chrome and silk and other difficult materials in the scene. The graininess in your shot is partly due to the darkness.

    Trick 1: Crank up the light and edit it down in Photoshop.

    Trick 2: Go for a lower resolution; in my experience a well-rendered CGI image has twice the linear resolution of a digital camera shot of the same pixel count (no Bayer array in place in the CGI world), so up-resing is no problem at all.

    Trick 3: Use NeatImage to get rid of some noise... it's something I use a lot these days to denoise some of my older digital photos (sensors have improved a lot over the years, and retaking shots is, well, difficult at best). I've also used it on some grainy renders from Cycles (no Nvidia card, so 500 cycles at 4000 x 3000 is all I can afford... a 5 hour wait is enough in my book). A rough sketch combining this with Trick 2 follows below the tricks.

    Trick 4: Buy hardware... multiple Nvidia cards (no SLI) will speed up the render considerably.

    Trick 5: Download Blender and use Cycles instead of Iray; both work better on Nvidia CUDA cores than on Intel processors, but Cycles is a lot more efficient on the processor alone... however, CUDA rules... completely!
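    Here is a rough post-processing sketch for Tricks 2 and 3 combined, using the Pillow imaging library for Python (an assumption on my part; the filenames are hypothetical and exact resampling constant names vary a bit between Pillow versions). The idea: render smaller, knock back isolated fireflies with a mild median filter, then upscale with Lanczos for print. A dedicated denoiser such as NeatImage will do a better job than the median filter, but the workflow is the same.

```python
# Rough post-processing sketch for Tricks 2 and 3, using Pillow (pip install Pillow).
# Filenames are hypothetical; resampling constant names differ slightly
# between Pillow versions.
from PIL import Image, ImageFilter

# 1. Start from a smaller, faster render (e.g. half the linear print size).
img = Image.open("render_2100x3150.png")

# 2. A mild median filter removes isolated fireflies / hot pixels while
#    largely preserving edges.
img = img.filter(ImageFilter.MedianFilter(size=3))

# 3. Upscale to the final print resolution with Lanczos resampling.
img = img.resize((4200, 6300), Image.Resampling.LANCZOS)
img.save("render_print_4200x6300.png")
```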

    Greets, Ed... who's at 1,008 iterations and 0.68% done! And 1 hour and 9 minutes are gone... I love an i5 :-)!

  • lucidghost Posts: 73
    edited December 1969

    WIP of a reptilian character from my novel. Still messing around with the gloss settings to get it looking right, but I like how it's coming along. Iray rocks!!

    Memnon_portrait.png
    500 x 707 - 458K
  • MEC4D Posts: 5,249
    edited December 1969

    Because you chose not to import your morph. The morph is an important part of the displacement, and the displacement settings will be imported for the morph in general, not for the base.

    pearbear said:
    MEC4D said:
    I use GoZ to export to ZBrush for my work and stuff for JCMs and others; for my clothing I export and import manually.
    It is also better to export the morph from ZBrush and import it manually with the Cinema or Lightwave profile, so you have a copy in case DS crashes before you have saved it.

    The good thing is that after you have created, for example, a face displacement, you can import it with GoZ back into Studio with the displacement map values included, and that is important so you aren't guessing at the best result.

    Much luck with GoZ

    Yay, it worked! I tried it out with some quick alpha brush mech bolts using GOZ to Z-Brush and back. (sorry G2F, I know it ain't pretty) This is by far the best success I've had with displacement maps. I usually have issues with seams that are especially apparent when the figure is backlit like this. But it worked like a charm! I even put a giant bolt on the middle of the shoulder UV seam, and it matched up great. Figure is at sub-d 2 and displacement sub-d is at 3, working with no problems. This opens up a lot of possibilities for me, thank you for the great help! I owe you a Jack and Coke.

    (edit - I cropped the photo because the bolts at the shoulder were just so ugly looking. A good test of displacement with strangely horrifying results)

    Sorry to bring up an old post, but I've just gotten around to trying this, and I'm confused. When I try to GoZ the sculpt back to G2F in DS, it offers to make a morph, but if I uncheck update geometry and instead choose update materials, it makes the figure blank white and does not import my displacement at all. Is there a workup step between "sculpt in Zbrush" and "GoZ" that I'm missing?

  • MEC4D Posts: 5,249
    edited December 1969

    Playing with animation
    3 frames per minute with GTX 760 .. total 1560 frames

    check out the IRAYMAN lol

    https://plus.google.com/+Mec4D/posts/1pTRYAenSfE

  • SimonJM Posts: 5,999
    edited December 1969

    Did a render for LlolaLane's Render a Month Challenge so for a very quick and dirty test of Iray I re-did it, with minimal changes (just swapped out the cigarette for the one from the Fatale Noir set, and slight pose tweaks to account for it). The smoke remains a prop from Smay's Fire and Smoke set.
    Iray changes include the lights, setting the ash part of the fag to be an emitter, applying Iray Uber to the smoke and tweaking it, and changing the tear to Iray Uber thin water.

    Not_Grey.jpg
    1000 x 1000 - 337K
  • Rashad Carter Posts: 1,803
    edited December 1969

    pete.c44 said:
    This is a quick and dirty side by side with Iray (LEFT)and OctaneRender(RIGHT)...Iray took around 30 minutes....Octane took 15 to achieve the same resolution. It's not a fair comparison because the lighting setups are different...but I think Iray compares favorably with Octane. Iray has less setup time and translates materials better than Octane. Look around the eyes of the Octane render...I would normally have to do post work on that eyelash flap and eye reflection. Another variable is that I have more experience with Octane. I just picked up Iray and started to use it. Considering the price....Iray is a real breakthrough for hobbyists. I suppose the Octane render is more photorealistic...but the Iray render is more dramatic..IMHO.

    No offense intended but much of this is in your head. I'm not quite sure how to state this. I've tried to explain this to you in the past I believe, unless I am getting you confused with someone else.

    I think it unwise to try to draw comparisons between unbiased rendering engines unless the lighting and materials are set up the exact same way for both renders. Same environmental setting, same lighting conditions, same camera placement and viewing angles, same camera type, same exposure and other settings all need to be exact or the comparison is pointless. For example, I'd assume the reason why the eyes look different in Octane than in Iray is because in the Iray render the "sun" is in front of the model, so it catches the eyes in highlight. The Octane render by comparison seems to have the sun behind his head, so his eyes remain in shadow.

    If you are looking for any sort of render engine difference I doubt you will ever find one that is in any way meaningful. Iray is equal to Octane because they are both unbiased, so there is a very limited range of divergence allowed between them. They simply don't have the option to diverge too greatly. If you provide each of them with the exact same parameters to work with, they MUST produce results that are identical or you can take them to court and litigate them. The key here is providing EXACTLY the same inputs, and that means knowing how to translate the terms in Iray to the terms in Octane, plus the painstaking task of manually ensuring that everything is perfectly aligned, which is an incredible amount of work. Some of the terms are the same and values are also similar, in other cases not so much, and that is why you see so few legit render comparisons.

    For example, there was at one time a lot of talk about how LuxRender was somehow more "accurate" than Octane, as was often touted by Paolo and some others. The argument was that Octane was faster because it was less accurate and that LuxRender was worth the extra time for the added quality. But the truth is that to a major extent all of the unbiased engines are the same and produce the exact same results if you can find some way to properly match the inputs.

    So I'd say that time spent comparing engines is essentially time that is wasted unless you set out from the start to compare them, and you build the scene toward that purpose from the beginning. At this point of unbiased saturation we are better off focusing on how to work with unbiased engines in a general sense than we are looking for differences in output. Because again, any differences we discover will almost always come down to an issue with the user, not with the underlying math of the applications, because it is all the same or it wouldn't be unbiased.

    Both renders are nice and realistic to equal degrees in my personal view. I actually prefer the Octane one for the environment, but I love Iray as well so for me it is a win win.

  • Musicplayer Posts: 515
    edited May 2015

    SimonJM said:
    Did a render for LlolaLane's Render a Month Challenge so for a very quick and dirty test of Iray I re-did it, with minimal changes (just swapped out the cigarette for the one from the Fatale Noir set, and slight pose tweaks to account for it). The smoke remains a prop from Smay's Fire and Smoke set.
    Iray changes include the lights, setting the ash part of the fag to be an emitter, applying Iray Uber to the smoke and tweaking it, and changing the tear to Iray Uber thin water.

    Nice render Simon, looks so much like the actress and filmstar Bette Davis. Maybe, just a slight twist of the hand to see the cigarette detail better (only a thought) ;-)

    Edited to note : Just seen your 'Render Challenge' picture has the cigarette in a more visible position. Looks even more like Bette Davis in this Black and white image. Very nice.

    Cheers :-)

    Post edited by Musicplayer on
  • MEC4D Posts: 5,249
    edited December 1969

    The key here is providing EXACTLY the same inputs, and that means knowing how to translate the terms in Iray to the terms in Octane, plus the painstaking task of manually ensuring that everything is perfectly aligned, which is an incredible amount of work. Some of the terms are the same and values are also similar, in other cases not so much, and that is why you see so few legit render comparisons.

    Thanks for mentioning it. People do not realize that; the only differences they will see come from the scene and materials they set up, and that is it.
    Most people render in Direct Light mode in Octane, which is the equivalent of Interactive mode in Iray, so that is another pitfall.
    To have it done well, all materials need to be accurate PBR materials with the same values and nothing else; then you will truly see the results.

  • Robert Freise Posts: 4,484
    edited December 1969

    WIP of a reptilian character from my novel. Still messing around with the gloss settings to get it looking right, but I like how it's coming along. Iray rocks!!

    Cool
    Looks like Grig from The Last Starfighter

  • Rashad Carter Posts: 1,803
    edited December 1969

    sheedee3D said:
    Kamion99 said:
    namffuak said:
    tjohn said:
    Check your old real-world photos. How many eyes have highlights? Unless it is a close-up portrait photo, I'm guessing very few.
    I keep wanting them as well, but realistically speaking a real camera at a distance wouldn't pick up any obvious eye reflection, would it?

    What you'll find in most photos is the infamous red-eye reflection of the flash off the retina..

    This is why I think aiming for "perfect photorealism" is overrated. Most photographs we see and want to emulate are "Photoshopped" (they even managed it in the '30s). Photoreal doesn't mean it looks good either; there are plenty of photographs that look like complete crap, and I know I've taken a lot of them.

    There was a thread on one of the Blender forums where someone made an image that looked completely photoreal; unfortunately, that photo was a crappy Polaroid with flash. It was an excellent technical experiment, but not a compelling image.

    As far as I'm concerned, don't worry too much about whether something is perfectly photorealistic, just worry about whether it looks good.

    @RawArt Thank you! looks great and dead simple to use.

    So true...Kamion99!...at the end of the day it just has to look convincing and good.

    Yes and no. "Looks good" is a rather subjective argument and has more to do with the way the individual artist has trained his eye to expect certain things and not to expect others. One thing about the "real deal" is that it always is surprising. I always "think" I know what a blade of grass looks like but every time I observe one in real life I always discover something new I'd never seen before.

    To a great extent many of us suffer in PBR simulations because we have our "eyes" trained by bad habits gained from years and thousands of renders made with biased engines. We have to re-learn right from wrong so that our expectations are in line with reality.

    For example, there was a poster a little while ago asking about eye reflections, wondering why they were not visible in the Iray render, expecting that the reflections were supposed to be mapped onto the cornea's UV map. Another poster explained that painted-on reflection maps were a legacy cheat often employed in 3Delight or Poser, but that such a thing would be very wrong in Iray. That poster explained that there needs to be a prominent light source in the scene for the eyes to reflect, just like in real life. So basically, the original user was expecting something due to bad habits gained from experience in rendering but not from real-life experience. Because, as another poster also stated, in real life you rarely see distinct eye reflections, yet in renders we've convinced ourselves over the years that those reflections are paramount for realism when in fact they are not.

    I think it is okay to let the render engine teach you from time to time, just like you'd let a real photograph teach you. You'd never argue with the realism of a real photo, and it should be similar with Iray. Iray will do what it is told, even if it doesn't look "right."

    Realism and art are a tricky combo to begin with. I drive past fields of grass and trees every day, but rarely have I pulled over to really OBSERVE the plants in detail. Because I know they are real my unconscious mind assumes I've already seen plants like this before, so my eyes tend to look at other more interesting things that are "new."

    In many of the most realistic renders, especially in Iray, Lux, and Octane, viewers tend to assume the image is entirely real, and so they don't tend to look very closely at it. Only once they are made aware it is fake do they begin to appreciate the attention to detail that made the image appear realistic in the first place. Realism, once it reaches a certain degree of accuracy, can in a sense be quite boring, because viewers will assume there is nothing there that they haven't already seen in some way, so they lose interest quickly. Dedicated art, however, that doesn't conform to physical laws might produce more interesting images, because so much of it will be things the eye doesn't see in real life, and that "extra headroom" might even provide more opportunity for "commentary" by the individual artist.

    All this to say that one needs to decide which they value more: that it "looks good" based on quite possibly misguided expectations, or accuracy, which has nothing to do with personal preference but with technical attention to detail. Realize that looking good is based on personal biases, while accuracy references universal physics. One leaves room for you to tweak it, the other does not.

    Fun fun!!!!

  • kyoto kid Posts: 41,260
    edited December 1969

    MEC4D said:
    Playing with animation
    3 frames per minute with GTX 760 .. total 1560 frames

    check out the IRAYMAN lol

    https://plus.google.com/+Mec4D/posts/1pTRYAenSfE


    ...pretty nice.
  • lucidghost Posts: 73
    edited December 1969

    WIP of a reptilian character from my novel. Still messing around with the gloss settings to get it looking right, but I like how it's coming along. Iray rocks!!

    Cool
    Looks like Grig from The Last Starfighter

    ha! I can totally see that :lol:

  • ArtisanS Posts: 209
    edited December 1969

    Just another day for Fiery in the restaurant of the Hindenburg! Took about 2.5 hours to render while I was discussing 3D with some friends... I must say it finished on the 5,000-cycle max I had set... now, I'm using an i5 bread-and-butter computer with an AMD video card that is nice enough but does not really contain too many CUDA engines... in Blender Cycles that is not a big deal... I can render 400 cycles deep and have a 4000 x 3000 pixel print ready in 5 hours (of the same scene)... and then I have about as many fireflies as this one (which is 1000 x 1000)... but the sheer quality of the materials in Iray is breathtaking... so build in Blender, UV unwrap in Blender, test render in Blender... then .obj and off to DAZ and Iray... I will invest in the biggest badass GTX my money can buy (and my money is too tight to mention)... but I'll squeeze out a 980, because the materials ooooze, the silk is silky, the chrome shines like a '58 Corvette bumper polished using Aaron's beard, and the red velvet... damn, where's Marilyn!

    Great stuff..................

    Greets, Ed,

    Meanwhile_in_the_restaurant_of_the_Hindenburg.png
    1000 x 1000 - 2M
  • j cade Posts: 2,310
    edited December 1969

    Yes and no. "Looks good" is a rather subjective argument and has more to do with the way the individual artist has trained his eye to expect certain things and not to expect others. One thing about the "real deal" is that it always is surprising. I always "think" I know what a blade of grass looks like but every time I observe one in real life I always discover something new I'd never seen before.

    To a great extent many of us suffer in PBR simulations because we have our "eyes" trained by bad habits gained from years and thousands of renders made with biased engines. We have to re-learn right from wrong so that our expectations are in line with reality.

    For example, there was a poster a little while ago asking about eye reflections, wondering why they were not visible in the Iray render, expecting that the reflections were supposed to be mapped onto the cornea's UV map. Another poster explained that painted-on reflection maps were a legacy cheat often employed in 3Delight or Poser, but that such a thing would be very wrong in Iray. That poster explained that there needs to be a prominent light source in the scene for the eyes to reflect, just like in real life. So basically, the original user was expecting something due to bad habits gained from experience in rendering but not from real-life experience. Because, as another poster also stated, in real life you rarely see distinct eye reflections, yet in renders we've convinced ourselves over the years that those reflections are paramount for realism when in fact they are not.

    I think it is okay to let the render engine teach you from time to time, just like you'd let a real photograph teach you. You'd never argue with the realism of a real photo, and it should be similar with Iray. Iray will do what it is told, even if it doesn't look "right."

    Realism and art are a tricky combo to begin with. I drive past fields of grass and trees every day, but rarely have I pulled over to really OBSERVE the plants in detail. Because I know they are real my unconscious mind assumes I've already seen plants like this before, so my eyes tend to look at other more interesting things that are "new."

    In many of the most realistic renders, especially in Iray, Lux, and Octane, viewers tend to assume the image is entirely real, and so they don't tend to look very closely at it. Only once they are made aware it is fake do they begin to appreciate the attention to detail that made the image appear realistic in the first place. Realism, once it reaches a certain degree of accuracy, can in a sense be quite boring, because viewers will assume there is nothing there that they haven't already seen in some way, so they lose interest quickly. Dedicated art, however, that doesn't conform to physical laws might produce more interesting images, because so much of it will be things the eye doesn't see in real life, and that "extra headroom" might even provide more opportunity for "commentary" by the individual artist.

    All this to say that one needs to decide which they value more: that it "looks good" based on quite possibly misguided expectations, or accuracy, which has nothing to do with personal preference but with technical attention to detail. Realize that looking good is based on personal biases, while accuracy references universal physics. One leaves room for you to tweak it, the other does not.

    Fun fun!!!!

    While not all photos may have reflections in the eye, I've yet to see a person in the real world who didn't. Perhaps not strong, but they're always there, so I would argue that a result that leads to no reflection in the eye is incorrect, even if everything seems physically correct.

    The real thing is that while the raytracer is physically correct, the environment still isn't, unless you're modelling the whole chunk of the physical world around your scene, including the atmosphere (this is why people like HDRIs so much, as they can get fairly close). But we obviously can't do that, so we cheat a little instead.

  • Arnold C Posts: 740
    edited December 1969

    Glad to say 4.8 successfully updated my Studio 4.7 installation with no problems, and installed all my plug-ins. So here is my first render with the new Daz Studio 4.8 Pro.

    Hope the Iray threads continue to grow and blossom now we are out of the beta stage and in full flight. :lol:

    Cheers :-)

    Really nice one!
    He looks as if he had lost his biker suit at full speed :)

    Where the ... are my pants?!

  • ArtisanS Posts: 209
    edited May 2015

    That is because IRL we always have something that is lit well enough to cause a reflection on our always-moist, shiny eyeballs... sometimes the reflections are caused by lamps as well. For instance, when using two lightboxes in a butterfly configuration you can see two roundish stripes of light on the retina (eh, iris of course, what were you thinking, Ed). When using a ringflash for a model shoot, the model looks like an early-2000s BMW... Good old Rembrandt used slightly off-white paint to paint in reflections, almost as the last touch to his paintings (the very last touch was selling the thing, since he had to earn his chow somehow).

    In rendering I sometimes place a rather large emitting rectangular shape behind the camera to mimic light flowing in from a window... if you are clever you can even put it in front of the camera, making the object itself invisible and using only its light.

    Eyes without reflections look dead (IMHO)...

    Greets, Ed.

    Post edited by ArtisanS on
  • kyoto kid Posts: 41,260
    edited May 2015

    pete.c44 said:
    This is a quick and dirty side by side with Iray (LEFT)and OctaneRender(RIGHT)...Iray took around 30 minutes....Octane took 15 to achieve the same resolution. It's not a fair comparison because the lighting setups are different...but I think Iray compares favorably with Octane. Iray has less setup time and translates materials better than Octane. Look around the eyes of the Octane render...I would normally have to do post work on that eyelash flap and eye reflection. Another variable is that I have more experience with Octane. I just picked up Iray and started to use it. Considering the price....Iray is a real breakthrough for hobbyists. I suppose the Octane render is more photorealistic...but the Iray render is more dramatic..IMHO.

    No offense intended but much of this is in your head. I'm not quite sure how to state this. I've tried to explain this to you in the past I believe, unless I am getting you confused with someone else.

    I think it unwise to try to draw comparisons between unbiased rendering engines unless the lighting and materials are set up the exact same way for both renders. Same environmental setting, same lighting conditions, same camera placement and viewing angles, same camera type, same exposure and other settings all need to be exact or the comparison is pointless. For example, I'd assume the reason why the eyes look different in Octane than in Iray is because in the Iray render the "sun" is in front of the model, so it catches the eyes in highlight. The Octane render by comparison seems to have the sun behind his head, so his eyes remain in shadow.

    If you are looking for any sort of render engine difference I doubt you will ever find one that is in any way meaningful. Iray is equal to Octane because they are both unbiased, so there is a very limited range of divergence allowed between them. They simply don't have the option to diverge too greatly. If you provide each of them with the exact same parameters to work with, they MUST produce results that are identical or you can take them to court and litigate them. The key here is providing EXACTLY the same inputs, and that means knowing how to translate the terms in Iray to the terms in Octane, plus the painstaking task of manually ensuring that everything is perfectly aligned, which is an incredible amount of work. Some of the terms are the same and values are also similar, in other cases not so much, and that is why you see so few legit render comparisons.

    For example, there was at one time a lot of talk about how LuxRender was somehow more "accurate" than Octane, as was often touted by Paolo and some others. The argument was that Octane was faster because it was less accurate and that LuxRender was worth the extra time for the added quality. But the truth is that to a major extent all of the unbiased engines are the same and produce the exact same results if you can find some way to properly match the inputs.

    So I'd say that time spent comparing engines is essentially time that is wasted unless you set out from the start to compare them, and you build the scene toward that purpose from the beginning. At this point of unbiased saturation we are better off focusing on how to work with unbiased engines in a general sense than we are looking for differences in output. Because again, any differences we discover will almost always come down to an issue with the user, not with the underlying math of the applications, because it is all the same or it wouldn't be unbiased.

    Both renders are nice and realistic to equal degrees in my personal view. I actually prefer the Octane one for the environment, but I love Iray as well so for me it is a win win.
    ...this is why I created the scene of the girls at the bus stop. The scene purposely involved a number of difficult elements including reflectivity, transparency, and subsurface scattering. Originally I set it up to compare the differences between pushing 3DL as far as I could and Reality 2.5/Lux. Well, in the midst of the test, Reality4 was released which supposedly supported SSS. Sadly the initial release had some serious bugs, one of which was if a scene was older (or processed in an earlier version of Reality), none of the surfaces would show in the materials tab. Now I wasn't going to rebuild the scene from scratch so I just made a copy of the 3DL version and had to reconvert all the materials again. Each time a new patch was issued I had to go through the same process all over again as well as deal with the glacially long render times that made Bryce's render engine look like a speed demon. I eventually gave up out of frustration and uninstalled Reality4 and Lux.

    When the DAZ 4.8 beta was released in March I decided to revisit the experiment and made a copy of the scene for Iray. After several tests I ran the final render, which I posted in the first incarnation of this thread (total render time just over three and a half hours, with maximum render time set to 4 hrs and convergence to 99%). Unfortunately, as I was never able to get a good clean render in Lux, even after upwards of 13 hours, I couldn't make a proper comparison of quality between the two. From a workflow standpoint, Iray won hands down, even though I, and many others, were pretty much "flying by the seat of the pants" with it.

    As to comparing the Iray render to the 3DL one, a total apples & oranges situation. 3DL has some advantages such as render time (especially in 4.8), being able to use a skydome with a distant light for the sun, using shaders like AoA's grass/rock ones, effects cameras, or hair generation plugins like Garibaldi or LAMH. However, even with all the tweaks I made, it couldn't match the realistic quality of Iray.

    Post edited by kyoto kid on
  • DustRider Posts: 2,800
    edited May 2015

    Nothing special, just more messing around with Iray and shaders. I liked the results so I thought I would share it. Custom figure/textures using V6 HD and the Pixar HDRI.

    Best if viewed at full resolution.

    speedtest3.jpg
    1429 x 2000 - 547K
    Post edited by DustRider on
  • Oso3D Posts: 15,047
    edited May 2015

    Still debating the look of a future character in my webcomic, Ambassador Aleph of the Forn Assembly.

    The Forn Assembly consists primarily of mai (robots) who don't feel compelled to have humanoid bodies, so I want something that stands out as a little alien.

    Previous attempt was a bit too much 'standard hard robot' like other mai. This is another attempt, but I'm not sure the somewhat humanoid elements work or I should go with something more abstract (maybe like the previous ball robot).

    Thoughts?

    (Also, I'd like to say that supersuit + bot genesis play very nicely together -- the supersuit 'below the skin' ends up making this delightful surface in the gaps between bot pieces. With transparency, niftiness)

    Ambassador_Aleph.jpg
    1538 x 2000 - 2M
    Post edited by Oso3D on
  • ArtisanS Posts: 209
    edited December 1969

    How about human eyes?

    Greets, Ed.

  • Jimbow Posts: 557
    edited December 1969

    Kamion99 said:
    While not all photos may have reflections in the eye, I've yet to see a person in the real world who didn't.

    Use one of the Iray water shaders on the Eye Reflection and Cornea surfaces.

  • DrowElfMorwen Posts: 538
    edited December 1969

    I did see a post on one of the forums where it was suggested to go into Render Settings > Filtering > Pixel Filter; there is a selection of filters to use: Box, Triangle, Gaussian, Mitchell, and Lanczos.

    In answer to the Pixel Filter question, because I was confused by this as well:

    http://wiki.bk.tudelft.nl/toi-pedia/Rendering_Mental_Ray:_Anti_Aliasing_Settings
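    For a feel of what the 'Mitchell' option does compared to, say, a Gaussian, here is a small Python sketch of the standard Mitchell-Netravali reconstruction filter (B = C = 1/3). This is the textbook formula for that filter family, not Iray's source code, and whether Iray uses these exact parameters is an assumption. The key behaviour is that sample weights fall off with distance from the pixel centre and dip slightly negative between one and two pixels out, which is what gives Mitchell its mild sharpening compared to the softer Gaussian.

```python
# Standard Mitchell-Netravali reconstruction filter (B = C = 1/3) -- not Iray
# source code, just the textbook kernel behind a 'Mitchell' pixel filter.

def mitchell(x, B=1/3, C=1/3):
    """Filter weight for a sample at distance x (in pixels) from the pixel centre."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6
    return 0.0

for d in (0.0, 0.5, 1.0, 1.5):
    # Weight falls off with distance and dips slightly negative past 1 pixel.
    print(d, round(mitchell(d), 3))
```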

  • Peter Wade Posts: 1,642
    edited December 1969

    This is an experiment I tried just to see if it would work, and it turned out better than I expected.

    The only light in the scene is coming from the stone lantern. The model has a lamp inside the enclosure; what I did was put the Iray shader on the lamp flame and turn it into a light emitter. It is emitting at a temperature of 2400 K (a bit too hot for a real flame, but the scene gets too orange for my taste if I use a real candle-flame temperature).
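    As an aside, the "too orange" effect has a simple physical reading. Below is a small Python sketch (plain blackbody physics via Planck's law, nothing from DAZ or Iray; the 1900 K figure for a typical candle flame is an approximation) comparing the blue-to-red energy ratio at a candle-like temperature with the 2400 K setting used here: the ratio rises with temperature, which is why the hotter emitter reads less orange.

```python
# Blackbody sketch (Planck's law) -- not DAZ/Iray code. Shows why a 2400 K
# emitter looks less orange than a ~1900 K candle flame: the blue end of the
# spectrum gains relative energy as temperature rises.
from math import exp

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_nm, temp_k):
    """Blackbody spectral radiance at the given wavelength and temperature."""
    lam = wavelength_nm * 1e-9
    return (2 * H * C**2 / lam**5) / (exp(H * C / (lam * K * temp_k)) - 1)

for temp in (1900, 2400):   # roughly a candle flame vs. the emitter above
    blue_to_red = planck(450, temp) / planck(650, temp)
    print(temp, "K  blue/red radiance ratio:", round(blue_to_red, 3))
```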

    StoneLantern.jpg
    960 x 768 - 113K
  • Rareth Posts: 1,462
    edited December 1969

    I see Iray is gathering more steam..

    Everything turned out pretty much the way I wanted it in this render.. except for the Glow Stick..

    Racing.jpg
    1014 x 626 - 242K
This discussion has been closed.