Show Us Your Iray Renders. Part III


Comments

  • tomtom.w Posts: 140

    tomtom.w said:
    Kyoto Kid said:

    ...we need a batch render setup.

    We also need an "Iray exposure meter" that lets us measure correct exposure settings (exposure value, EV) with any conceivable light setup, like the open-source one for V-Ray, or better (VRayLightMeter, https://vimeo.com/98576920).

    If you set the Viewport display mode to Iray, there is a widget to the left of the draw-style sphere; click that, then click or drag on an area that should be well lit to adjust the EV.
    Thanks. I'll try it.
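    [Editor's note: for anyone who wants to sanity-check EV numbers by hand in the meantime, Iray's tone mapping in DAZ Studio is driven by camera-style controls (f/stop, shutter speed, film ISO), and the Exposure Value shown follows the standard photographic formula. A minimal sketch in Python; the function name is my own:]

    ```python
    import math

    def exposure_value(f_number: float, shutter_seconds: float, iso: float = 100.0) -> float:
        """Photographic exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
        return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100.0)

    # DAZ Studio's Iray tone-mapping defaults (f/8, 1/128 s, ISO 100) work out
    # to EV 13, a daylight-bright setting; lower the EV for dim interiors.
    print(exposure_value(8.0, 1.0 / 128.0))  # 13.0
    ```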

  • Kharma Posts: 3,214

    medeia said:
    Here is the final render with Cath's skin settings, which she posted in the other thread. I made a brow mask and micro normal map as best I could (we use masks in Reality too, but there it's automated) and it will have to do until she releases some human shaders. It rendered for 2 hours on CPU and I still see some grain in the shadows.
    I am in love with Iray ♥ I will have to look for a better VGA.

    This is beautiful. Do you have a link to the post with the skin settings? I can't find it in such long postings. Thank you.

  • Mousso Posts: 239

    Kharma said:
    This is beautiful. Do you have a link to the post with the skin settings? I can't find it in such long postings. Thank you.

    Thank you :)
    It's the 3rd post: http://www.daz3d.com/forums/discussion/54734/P1350
    Play around with the glossiness and SSS amount if your skin turns out too shiny or red. Every skin texture is different :)

  • Kharma Posts: 3,214

    medeia said:
    It's the 3rd post: http://www.daz3d.com/forums/discussion/54734/P1350
    Play around with the glossiness and SSS amount if your skin turns out too shiny or red. Every skin texture is different :) [...]

    Thank you :)

  • Dumor3D Posts: 1,316

    As for the 4GB, 6GB or 12GB of VRAM questions:

    I have done a fair amount of rendering now, and so far 4GB has been enough. This includes one scene with a full environment and twelve clothed people loaded. OpenGL was complaining, as the scene was big enough that setting it up was getting difficult. That has been the situation so far with my two largest scenes.

    However, VRAM use is mostly about the size of the texture image files. From what I can tell, Iray is exceptional at recognizing multiple uses of the same texture and seems to load it only once. I do want to test this more. I do know it loves tiling, and that definitely helps with VRAM use. I have a rather large environment set in the works that renders just fine on a 2GB card, even with one figure in the scene.

    Texture Atlas can help if the scene will not fit. There really isn't a set rule for what will work and what won't, as it depends on the file sizes of the textures for the product or products in the scene. The main difference between Iray and 3DL is that 3DL didn't care so much about texture sizes; they loaded into RAM or swap. Iray, on the other hand, doesn't care much about mesh efficiency and happily eats high-poly meshes. I have been adjusting my products to use lower-res textures and to save them with more efficient methods.

    By the way, for those using PCs, there is a neat utility called GPU-Z which displays your video card statistics. It's a great way to see how much VRAM is in use during a render. It will also show if the card kicks out. And if a scene starts kicking the card out, you can hide a few items and see how much difference that makes in VRAM use, or find an item loaded with huge textures that may need adjusting.

    Anyway, 4GB has been great for me so far. I also have a Titan X, just in case. I think my next card will be a 6GB GTX 980 Ti. I still believe the 4GB GTX 980 is perhaps the best in CUDA cores per dollar, with the 980 Ti very close behind. Two of those cost the same as one Titan X but are about 25% faster... as long as you don't break through the 4GB limit. Either way, I suggest trying GPU-Z so you can make a more informed decision based on your own scenes and how you like to work.

    Oh, and GPU-Z will also help you find what may be using VRAM on your system: web browsers, email programs, a big desktop background image, Windows Aero... it all adds up. Another hint: in a two-card system, if you put your monitor on the card with less power or VRAM, the system will use that card for those other programs, which can easily be 0.5GB of VRAM or more. The second, more powerful card can then be used purely for rendering.

    Hope that helps!
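    [Editor's note: GPU-Z is a Windows GUI, but the same numbers are exposed on the command line by nvidia-smi, which ships with the NVIDIA driver on both Windows and Linux. A minimal sketch of polling VRAM during a render, in Python; the 5-second interval and 60-sample window are arbitrary choices of mine:]

    ```python
    import subprocess
    import time

    def vram_usage_mib():
        """Return (index, used MiB, total MiB) for each NVIDIA GPU via nvidia-smi."""
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=index,memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        return [tuple(int(field) for field in line.split(","))
                for line in out.strip().splitlines()]

    # Sample while a render runs: if 'used' closes in on 'total', the card is
    # likely to drop out of the render and Iray will fall back to the CPU.
    for _ in range(60):
        for index, used, total in vram_usage_mib():
            print(f"GPU {index}: {used}/{total} MiB")
        time.sleep(5)
    ```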

  • alexhcowley Posts: 2,392

    Dumor3D said:
    As for the 4GB, 6GB or 12GB of VRAM questions: I have done a fair amount of rendering now, and so far 4GB has been enough. [...] Either way, I suggest trying GPU-Z so you can make a more informed decision based on your own scenes and how you like to work. [...]

    A friend of mine is about to build me a custom PC. This will have a 2GB 730 for the screen and a 4GB 970 for Iray renders. From what you say, I'm making the right choices.

    Cheers,

    Alex.

  • Dumor3D Posts: 1,316

    alexhcowley said:
    A friend of mine is about to build me a custom PC. This will have a 2GB 730 for the screen and a 4GB 970 for Iray renders. From what you say, I'm making the right choices.

    Cheers,

    Alex.

    That, IMO, is a really good decision. The 970, with 1664 CUDA cores, is a very nice card at a very good price point. My perspective is a bit different: as a PA, speed is much more important to me. When you do hundreds and hundreds of test renders and tons of thumbnails, speed gets very important. I've invested heavily in graphics cards. :)

  • frank0314 Posts: 14,330

    I just got a GTX 960 with 4GB and it is proving to be great for Iray. Time is very precious as a PA, because knowing time versus money earned tells you whether you need to change your workflow. Of course, the faster you get done with stuff, the smaller your loss. Many times in this profession you are looking at making at least minimum wage here in the States, so every second counts. As Dumor said, a big chunk of time is consumed by hundreds of renders to check and build your mats and thumbnails. Promos will normally take you a week of quality renders. A faster and bigger GPU and CPU will greatly decrease your time.

  • j cade Posts: 2,310

    medeia said:
    Here is the final render with Cath's skin settings, which she posted in the other thread. [...]
    I am in love with Iray ♥

    Really excellent portrait. I think half the realism comes from how naturally she is posed. Natural pose + some facial expression does way more to enhance realism than, say, the perfect specular values IMO.

    I'm also going to back up Mec4d about the graininess; I mean, half the time in movies they add film-camera flaws to footage shot digitally because it apparently feels more realistic.

  • kyoto kid Posts: 41,260

    Frank0314 said:
    I just got a GTX 960 with 4GB and it is proving to be great for Iray. [...]

    ...so for CPU rendering, the more CPU threads the better, then, as well.

    I've actually frozen my system (12 GB) with an out-of-memory error after submitting the scene of the two girls at the bus stop to LuxRender (Reality 2.5).

    I wish I could find the post where I read that someone using a GTX Titan (6 GB, 2688 cores) was experiencing extreme slowness with a scene that really didn't look all that complex. Again, to me it sounded as if something was overloading the Titan's VRAM, which kicked the render over to the CPU; however, I couldn't see anything in the scene that would have caused it.

    Still considering the Titan X, as many of my scenes are very detailed and complex. Interesting that Nvidia never released a GTX-series GPU with 8 GB to compete with AMD/Sapphire's R9 290X Vapour-X (early rumours about the upgraded GTX 980 mentioned doubling the GPU's memory to 8 GB).

  • kyoto kid Posts: 41,260

    HoMart said:
    MEC4D said:
    Yeah, I was thinking the same.. would be cool

    Kyoto Kid said:
    MEC4D said:
    Once you close the render window or DS, it is gone...
    However, you can close the scene and still resume, since the scene is already loaded. To be sure, I just checked: I opened another scene and was able to resume the old one.

    I feel like I might be missing something, but is it possible to stop a render, close the scene, and open it again later and resume the same render, like with Reality/Lux?

    ...we need a batch render setup.

    This one:
    http://www.daz3d.com/batch-render-for-daz-studio-4-and-rib
    works well in 4.8 AND Iray.
    If you set it to "use scene saved settings", it uses the saved Iray settings.

    ...can it run the process with DAZ Studio shut down, like Lux/Reality can?

    Also, doesn't RIB (RenderMan Interface Bytestream) only support RenderMan-based render engines like 3DL?

    The other feature of Lux I liked was preset tone mapping based on actual "real world" film types.

  • Dumor3D Posts: 1,316

    Kyoto Kid said:
    I've actually frozen my system (12 GB) with an out-of-memory error after submitting the scene of the two girls at the bus stop to LuxRender (Reality 2.5). [...]

    For at least one scene I have done, and this is really hard to put a finger on... when I sent it to a Lux CPU render, it seemed to use between 6 and 8 GB of RAM. The same scene sent to Iray was well under 4GB of VRAM. It's not an apples-to-apples thing. Also, 3DL used about that much RAM for the same scene.

  • pearbear Posts: 227

    Trying out some of the new Iray shader products from the store, including the leather and brocade/satin shaders, Mec4D's Unshaven 2 hair shader, and DimensionTheory's Studio HDRI pack. I really like the Studio HDRI pack; it's definitely a new favorite go-to light choice for me.

    Other stuff used here is the Capricornia outfit, Cyra hair, and Stonemason's Modular Sci-fi. I also added some spotlights for rim lighting, and the skin shader and character morph are my own.

    [Attachment: sf.jpg, 960 x 1200]
  • kyoto kid Posts: 41,260
    edited June 2015

    ...I've routinely had 3DL renders peak at 10 - 10.5 GB, which sends everything into "slo-mo" swapping mode. By that estimate, for Iray such a scene would be over 5 GB, encroaching on 6. 4 GB won't cut it then.

    If I'm going to lay out serious zlotys for a new GPU, I want to make sure that 90% of my scenes will fit in video memory and not fall back to the CPU and physical memory. 8 GB actually seems the most optimal for me. It's just sad that Nvidia only offers that in the expensive Quadro Kepler line. The AMD/Sapphire R9 290X Vapour-X with 8 GB retails for around $470, but is useless for Iray.
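    [Editor's note: to put numbers on that estimate, the only data point in this thread is Dumor3D's scene that took 6-8 GB of RAM in Lux but fit under 4 GB of Iray VRAM, i.e. very roughly a halving. A back-of-the-envelope sketch under that assumption alone; the 0.5 ratio is an anecdote, not a spec:]

    ```python
    def iray_vram_estimate_gb(peak_cpu_render_ram_gb: float, ratio: float = 0.5) -> float:
        """Guess Iray VRAM needs from a 3DL/Lux CPU-render RAM peak.
        ratio=0.5 is only an anecdote from this thread, not a measured rule."""
        return peak_cpu_render_ram_gb * ratio

    for peak in (10.0, 10.5):
        estimate = iray_vram_estimate_gb(peak)
        print(f"3DL peak {peak} GB -> ~{estimate:.1f} GB Iray; fits in 4 GB: {estimate < 4.0}")
    ```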

  • Khory Posts: 3,854

    Great render, pearbear. Her outfit looks super cool. Really love the unexpected mix of fabrics.

  • MEC4D Posts: 5,249

    Twelve people fully textured, with clothing, loaded on 4GB? That is a miracle, man. I can have only 2 naked, fully textured people on 2GB... though I can have 192 without textures.
    Unless all your figures used the same texture maps, I don't see how it fit and rendered, since you still need maybe 350 MB at least for render time.


    Dumor3D said:
    As for the 4GB, 6GB or 12GB of VRAM questions: I have done a fair amount of rendering now, and so far 4GB has been enough. This includes one scene with a full environment and twelve clothed people loaded. [...]

  • pearbear Posts: 227

    Thanks! That Capricornia outfit was lurking hidden in the depths of my runtime; it is a cool design. I especially dig the asymmetrical pants.

  • MEC4D Posts: 5,249
    edited June 2015

    All the JPGs you use as textures will be decompressed, so you can't expect the same amount of memory use in 3Delight and Iray; on top of that, Iray takes memory for the rendering itself.

    Let's decompress one character, assuming it has only 3 types of textures (it has more), all at 2K x 2K:

    120 MB for color
    120 MB for specular
    120 MB for bump maps

    That's 360 MB of textures per character at 2K; x 12 characters = 4.3 GB (8.6 GB if all the textures were 4K). Now add the clothing, geometry, etc., and you will end up with over 8 GB.

    Decompressed, per texture:
    1 x 4096 = 64 MB
    1 x 2048 = 32 MB
    1 x 1024 = 10 MB

    But with that I want to say that the amount of VRAM you really need can only be estimated by you and no one else. People don't realize that they are rendering on the CPU, or that the card is shuffling all its data over PCI because it can't hold everything on its own due to low memory. So if your budget allows it, go as big as you can and don't settle for less; in this case, more is better. 4 GB will be a pleasant standard for simple scenes if you don't plan to render an army of zombies or other complex scenes, but I've seen people on the forum with 6 GB complaining about scenes that rendered very slowly because they didn't fit. So your decision comes down to your budget.

    Dumor3D said:
    For at least one scene I have done... when I sent it to a Lux CPU render, it seemed to use between 6 and 8 GB of RAM. The same scene sent to Iray was well under 4GB of VRAM. [...]
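    [Editor's note: for readers who want to redo this arithmetic for their own texture sets, once a JPEG is decompressed its in-memory size is just width x height x bytes per pixel. A minimal sketch assuming 8-bit RGBA (4 bytes per pixel) and no mipmap or engine overhead, which is presumably why the figures quoted above run somewhat higher:]

    ```python
    def decompressed_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
        """Uncompressed in-memory size of one texture, in MiB."""
        return width * height * bytes_per_pixel / (1024 ** 2)

    for side in (4096, 2048, 1024):
        print(f"{side} x {side}: {decompressed_mib(side, side):.0f} MiB")
    # 4096: 64 MiB, 2048: 16 MiB, 1024: 4 MiB

    # Scene-level estimate in the spirit of the post above:
    # ~360 MiB of 2K maps per character, times 12 characters.
    print(f"12 characters: {12 * 360 / 1024:.1f} GiB of character textures alone")
    ```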

  • RAMWolff Posts: 10,256

    This is the baby I want but it's sold out all over the place...

    http://www.evga.com/articles/00934/EVGA-GeForce-GTX-980-Ti/

    My only beef is that they only provide one DVI port. I have one monitor that has an HDMI port, but it's my default monitor. The last time I hooked it up to an HDMI port it was no longer the default: when booting, the second monitor was chosen to display the screens, and then the HDMI driver would kick in. Perhaps that's been fixed in Windows 8.1; this was happening with Windows 8. I got so frustrated that I just looked for cards with two DVI ports so this would no longer be an issue. Does anyone know, if I go for this card when it becomes available again, whether there is a way to force Windows to load an HDMI driver at boot time?

    Thanks!

  • DestinysGarden Posts: 2,550

    Khory said:
    Great render pearbear. Her outfit looks super cool. Really love unexpected mix of fabrics.

    *Nods* and completely agrees with Khory. All the materials and surfaces really look fabulous!

  • kyoto kid Posts: 41,260
    edited June 2015

    RAMWolff said:
    This is the baby I want but it's sold out all over the place...
    http://www.evga.com/articles/00934/EVGA-GeForce-GTX-980-Ti/ [...]


    ...but is overclocking really that helpful for rendering? I don't overclock anything on my system.
  • MEC4D Posts: 5,249
    edited June 2015

    Outputs
    1 x Dual-Link DVI-I
    1 x HDMI 2.0
    3 x DisplayPort 1.2

    Yeah, everything's gone lol
    http://www.evga.com/Products/ProductList.aspx?type=0&family=GeForce+900+Series+Family&chipset=GTX+980+Ti

    RAMWolff said:
    This is the baby I want but it's sold out all over the place...
    http://www.evga.com/articles/00934/EVGA-GeForce-GTX-980-Ti/ [...]

  • MEC4D Posts: 5,249

    Yeah, it is... I render so much faster. Just make sure the temperature doesn't go over 80°C too often or it will kill the GPU over time; mine usually stays at 75°C max when overclocked to the maximum allowed.
    But the one Richard showed is superclocked: a faster GPU means faster calculation, and a little bit of extra MHz makes a huge difference.

    Kyoto Kid said:
    ...but is overclocking really that helpful for rendering? I don't overclock anything on my system.
  • mjc1016 Posts: 15,001

    I've also found that factory-overclocked cards like that generally run well within limits when it comes to heat. My son's 'superclocked' card usually runs cooler than the 'regular' card it replaced.

  • MEC4D Posts: 5,249

    Yes, that's true, the cards do run much cooler, but there is usually a different type of cooling on this type of card, and they are still covered by the normal warranty. That's why I chose 2 x Titan X 'superclocked': I don't have to overclock manually anymore, and my warranty stays safe in case something goes wrong, as I am going to abuse them intensely lol

    mjc1016 said:
    I've also found that factory-overclocked cards like that generally run well within limits when it comes to heat. [...]
  • RAMWolff Posts: 10,256

    MEC4D said:
    Yeah, it is... I render so much faster. Just make sure the temperature doesn't go over 80°C too often or it will kill the GPU over time. [...]

    Weird thing is that the "Power Consumption Overview" info showed that this baby doesn't use any more power than my current card, a GeForce GTX 760. Not sure how NVIDIA pulled that off, but the benchmarks showed them at the 10 point. I kept looking at the chart thinking "that can't be right", but I guess it is.....

    http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html

  • Dumor3D Posts: 1,316

    MEC4D said:
    All the JPGs you use as textures will be decompressed, so you can't expect the same amount of memory use in 3Delight and Iray; on top of that, Iray takes memory for the rendering itself. [...] That's 360 MB of textures per character at 2K; x 12 characters = 4.3 GB (8.6 GB if all the textures were 4K). Now add the clothing, geometry, etc., and you will end up with over 8 GB. [...]

    Mec, your math seems fine, but you have made some wrong assumptions about how Iray handles scenes. My testing and real-world results are very different from that simple math. My best guess is that Iray is a lot more masterful at handling texture files than anything I've used before. It is not a matter of just multiplying out some numbers.

    Attached is a re-render of that scene... sorry, it's just a quickie; I didn't save the original. Also attached is a screenshot of GPU-Z as the render was in progress. Clearly, this 4GB GTX 980 was functioning and under 4GB. In fact, both of my 980s ran through the entire render process while my 2GB GTX 660s dropped out, as expected. I keep the CPU turned off, which is not a suggested practice unless you are really on your game with your card(s). I do apologize for saying "well under 4GB VRAM", as it is closer than I remembered; it has been several months since I ran that test.

    I attached another Iray test render with 88 figures in it. This was done on my laptop using a 4GB GTX 780M. I did this so long ago that I have no idea about the VRAM use. All I do know is that it positively rendered on the card, which was the point of this test. Yeah... ugly, but impressive that it worked.

    Also, a lot can be done to optimize texture files. First, some space can be gained by saving them losslessly or stripping the text that can be stored inside a JPG. Carefully choosing a proper quality level is important, but beyond that, there are programs that will re-optimize the Huffman compression with zero loss to image quality while cutting file sizes by around a third. It's a new game. I've been working very hard on texture conservation with, so far, great success.
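    [Editor's note: the lossless Huffman re-optimization described here is what the stock jpegtran tool from libjpeg/libjpeg-turbo performs. A small sketch driving it from Python, assuming jpegtran is on your PATH; the textures folder and .opt.jpg naming are placeholders of mine:]

    ```python
    import subprocess
    from pathlib import Path

    def reoptimize_jpeg(src: Path, dst: Path) -> None:
        """Losslessly shrink a JPEG: rebuilt Huffman tables (-optimize) and
        stripped metadata (-copy none); the pixel data is left untouched."""
        subprocess.run(
            ["jpegtran", "-optimize", "-copy", "none", "-outfile", str(dst), str(src)],
            check=True,
        )

    for texture in Path("textures").glob("*.jpg"):  # placeholder folder
        optimized = texture.with_name(texture.stem + ".opt.jpg")
        reoptimize_jpeg(texture, optimized)
        print(f"{texture.name}: {texture.stat().st_size} -> {optimized.stat().st_size} bytes")
    ```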

    I sure hope you believe me this time as I do not plan to do a video to show this as it happens. :) And if you do now believe me, your apology is accepted.

    BTW, you are getting some great skin renders. Good work!

    [Attachment: IrayTest88peeps.jpg, 1280 x 720]
    [Attachment: Sauna12PeepsRender.jpg, 1300 x 1000]
  • Dumor3D Posts: 1,316

    Oh well, it seems the GPU-Z image didn't get uploaded. :( Dang forum software.

    [Attachment: Sauna12Peeps980.JPG, 400 x 560]
  • mjc1016 Posts: 15,001

    RAMWolff said:
    Weird thing is that the "Power Consumption Overview" info showed that this baby doesn't use any more power than my current card, a GeForce GTX 760. [...]

    Actually, it isn't that strange... cards have held to the same power requirements for quite some time now; with each generation, they get more out of each watt.

    MEC4D said:
    ...that's why I chose 2 x Titan X 'superclocked', so I don't have to overclock manually anymore and my warranty stays safe. [...]

    Yeah, I forgot to mention that about the warranty, too...

  • kyoto kid Posts: 41,260
    edited June 2015

    MEC4D said:
    Yeah, it is... I render so much faster. Just make sure the temperature doesn't go over 80°C too often or it will kill the GPU over time. [...]

    ...for myself, overclocking brings a line from a certain film to mind.

    The light that burns twice as bright burns half as long.
    ---Dr. Eldon Tyrell


    On another note.

    Just saw that the top-of-the-line Quadro GPU (M6000) has recently moved to Maxwell as well: still 12 GB, but updated to 3072 CUDA cores like the Titan X. So, curious one that I am, I headed to eBay to see if there was any fallout. Yep, K6000s are going for as low as $3,000 ("buy it now" or "best offer") instead of the original $5,000 list.
