What would be necessary to get a network queue render rig?

anonbach Posts: 16

Hello guys,

I have an RTX 2080 Ti system already (1x), and I find it awesome for single-scene, static renders. If I want to light and animate scenes with up to 3 people at 1080p, 24-30 fps, what kind of hardware do I need so that rendering doesn't take so long?

I'm currently looking at this workflow setup:

main rig (current one with Ryzen 5 2600 | RTX 2080 Ti for scene setup) --> purpose-built rendering machine with 2 x RTX 2080 Ti and maybe another RTX 2070 for a total of 3 GPUs (because the last slot is a PCIe 3.0 x16 slot running at x4 speeds). Would this allow me to render animations in a decent amount of time using Iray with OptiX Prime acceleration?

Do I need a monitor for that? CPU doesn't matter, right? Would a UPS be necessary? 

I just want the fastest possible rendering machine for my use case so that I don't wait days for animations to finish and hate the idea of using cloud rendering (even if it is faster). 

I know it's a lot to ask, but I'd like some feedback and help please. Thank you.

Post edited by anonbach on

Comments

  • Kitsumo Posts: 1,216

    I've been looking into this for a while and I've never been happy with the results I've found:

    https://www.daz3d.com/forums/discussion/132951/tutorial-iray-server-render-farm-batch-rendering-for-daz-studio/p1

    https://www.daz3d.com/forums/discussion/229856/iray-cloud-server/p1

    https://www.daz3d.com/forums/discussion/117701/iray-server-possible-or-a-waste-of-time

    https://www.daz3d.com/forums/discussion/201316/farm-render-on-iray-server

    https://www.daz3d.com/forums/discussion/57563/nvidia-iray-server-for-daz-studio

    I don't know all the details, but it sounds like Nvidia charges the user to operate a network render farm in their own home with their own equipment (and I'm sure someone will correct me if I'm wrong). Anyway, I'm not saying it's bad, just that it wasn't for me. Obviously the program is still running, so there must be people using it.

    After that I decided to look into using cryptocoin mining hardware for 3D rendering and I got decent results: https://www.daz3d.com/forums/discussion/269211/it-s-alive-alive-my-pixel-mining-rig . The short answer to your question is: if you don't want to pay for Nvidia Iray Server, then you're limited to how many GPUs you can connect to your main PC. If you want to use Iray Server, then I guess you can expand a lot more, but I have no idea what their pricing is. Good luck and let me know what you decide to do. I can't afford the high-end hardware, but I can live vicariously through others.

  • anonbach Posts: 16
    edited October 2018

    Well, to be fair, my main concern isn't really the networking part - I can still render locally on the secondary rendering rig by moving my entire Daz3D instance over to the second computer and then syncing my .duf files over the network.
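    That copy step could even be scripted. A minimal sketch, assuming hypothetical local and network-share paths (the function name and directory layout are made up for illustration):

```python
from pathlib import Path
import shutil

def sync_duf_files(src_dir: str, dst_dir: str) -> list[str]:
    """Copy .duf scene files to the render rig's share when missing or newer.

    Hypothetical helper: compares modification times and only copies
    files that changed since the last sync.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.glob("*.duf"):
        target = dst / f.name
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied
```

    Pointing the second machine's Daz content library at the share would then pick up the synced scenes.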

    My main concern is whether even 2 x 2080 Ti, and maybe a 3rd GPU in the RTX 2070, would keep short animations from taking days to render, considering we're talking about at least $2.9K worth of GPU horsepower there, aka 11,008 CUDA cores to utilize. You think that's enough, or do I have to go 3 x RTX 2080 Ti and a HEDT platform (for more PCIe lanes)?

    EDIT: To be fair, a GTX 1080 Ti isn't exactly common hardware either; it's still firmly an enthusiast-level GPU.

    Post edited by anonbach on
  • Kitsumo Posts: 1,216

    I guess it depends on how complex your scenes are, how many iterations per frame, etc. Here's a thread with some current benchmarks on dual 2080ti cards: https://www.daz3d.com/forums/discussion/comment/4009596/#Comment_4009596 . I'd say that it's definitely worth trying with your current setup. linvanchene and outrider42 could tell you a lot more about that than I could.

  • anonbach Posts: 16
    edited October 2018

    O.O

    Tried to render a simple 100-frame animation at 1080p with two Genesis 8 figures, the default outdoor HDRI, ground on, and a max ray-tracing path of 13... and according to the window it takes 7.5 hrs. 7.5 hours. It's very baffling, because one frame was supposed to take only a shade under 2 minutes. Where is this 7.5 hrs coming from? To be fair, I did try to render it as a "movie". Maybe an image series would be better?
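    Doing the math on that estimate (a quick sketch with the numbers above):

```python
frames = 100
est_total_hours = 7.5

# What the progress window's 7.5-hour estimate implies per frame
implied_min_per_frame = est_total_hours * 60 / frames  # 4.5 min/frame

# What the single-frame test (~2 min/frame) would have predicted instead
predicted_hours = frames * 2 / 60  # about 3.3 hours
```

    So the movie estimate is running at roughly double the per-frame time the still test suggested.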

     

    This is with an RTX 2080 Ti. A freaking $1200 USD GPU. It's beginning to dawn on me (even though I already knew) how much rendering horsepower is required for feature films like Wreck-It Ralph and other Disney/Pixar movies. Man, this is really humbling.

    Post edited by anonbach on
  • I'm not making full feature films, but damn, that's surprisingly intensive even for a top-of-the-line GPU. The good news is that it's only using the CUDA cores on the RTX 2080 Ti, since Iray does not support the RTX features (neither the ray-tracing cores nor the Tensor cores), so it runs pretty cool: a max of 75C and an average of 60-65C at 50% fan speed. Really hope RTX features can accelerate this rendering...

  • You might want to consider the Reality render engine. It does distributed rendering. I'm not a huge fan of it due to the hassle of using its material system, but I know some people love it.

  • Material system? Does that mean I will have to port over the subsurface, displacement, bump, diffuse and other shaders to make it work every time I want to render? 

  • Sven Dullah Posts: 7,621

    Here is a similar thread, I made a few comments, might want to check it out...

    https://www.daz3d.com/forums/discussion/286251/animation-image-render-size#latest

  • Wow, thanks for the link! Do you think having RTX 2080 Ti's in 2 x NVLink (SLI OFF, of course and waiting on potential memory pooling with RTX 2080 Ti) with a tertiary RTX 2070 is overkill? I really enjoy making animations and beginning to do some contract work for a studio atm, so time is kind of key. 

    They mentioned rendering at 4K isn't quite linear in terms of added time - how much longer are we talking about? 20%? 30%? 

    Thank you, Sven Dullah!

     

     

  • Sven Dullah Posts: 7,621
    edited October 2018
    anonbach said:

    Wow, thanks for the link! Do you think having RTX 2080 Ti's in 2 x NVLink (SLI OFF, of course and waiting on potential memory pooling with RTX 2080 Ti) with a tertiary RTX 2070 is overkill? I really enjoy making animations and beginning to do some contract work for a studio atm, so time is kind of key. 

    They mentioned rendering at 4K isn't quite linear in terms of added time - how much longer are we talking about? 20%? 30%? 

    Thank you, Sven Dullah!

     

     

    Sorry I'm a Mac - and 3Delight-user so can't help with your rig. But if you decide to have a go at 3DL I'm more than willing to help:)

    About render times, I'm guessing more like 50%; I think Iray and 3DL aren't quite comparable in that regard, 3DL being CPU-only and handling system memory differently. As a sidenote, the 3DL devs are introducing the 3DL cloud this month, I'll see if I can dig up a link;)

    Post edited by Sven Dullah on
  • anonbach Posts: 16
    edited October 2018

    So, how is CPU rendering on Mac with 3Delight? I've been under the impression that CPU rendering isn't optimal. How is the performance on your end?

    Man, it would be silly indeed if 3DL on a CPU-only Mac turned out faster than dedicated GPUs in Iray on Windows...

    Post edited by anonbach on
  • Sven Dullah Posts: 7,621
    edited October 2018

    About render times, I'm guessing more like 50%; I think Iray and 3DL aren't quite comparable in that regard, 3DL being CPU-only and handling system memory differently. As a sidenote, the 3DL devs are introducing the 3DL cloud this month, I'll see if I can dig up a link;)

    https://www.digitalengineering247.com/article/3delight-cloud-and-3delightnsi-debut-as-new-rendering-tools/

    You might also want to check this out:

    https://www.daz3d.com/forums/discussion/280441/awe-shading-kit-for-daz-studio-and-3delight-commercial/p1

     

    Post edited by Sven Dullah on
  • Is it wrong, then, that I hate the concept of cloud rendering? I like my local hardware very much, thank you.

    Electricity is basically a flat rate for me, because I'm in an office.

    Also, they mentioned (1st link) it was to be 2c/min for a 24-core slice. A 24-core slice of what? A CPU? Why would I want CPU rendering over the highly parallelized GPU kind, again?
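    If that quoted rate is right, the cost math is simple (a sketch assuming the 2c/min per-slice figure from that link):

```python
def slice_cost_usd(hours: float, slices: int = 1,
                   rate_per_min: float = 0.02) -> float:
    """Cost of a cloud render at the quoted 2 cents/min per 24-core slice.

    The rate and billing model are taken at face value from the linked
    announcement, not verified.
    """
    return hours * 60 * slices * rate_per_min

cost = slice_cost_usd(7.5)  # a 7.5-hour job on one slice -> $9.00
```

    So a render that ties up my GPU for a working day would cost single-digit dollars there, if a slice is actually fast enough.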

    How fast would a Xeon Platinum 8160 (24C/48T) be in 3Delight compared to something like an RTX 2080 Ti in Iray? This could be what decides whether I outsource my rendering to the cloud, as much as I hate the idea.

  • Also, found this interesting tidbit in an article:

    While I must admit that I don’t see any grain artefacts in the above animation, the scene isn’t complicated and does not contain huge amounts of detail. More complex scenes would need to be evaluated, especially at larger sizes. The “render and forget about it” approach isn’t going to work reliably with Iray – at least not for animations.

    So IRAY is a bad choice for animations because it's not consistent? Thanks. 

  • Sven Dullah Posts: 7,621
    edited October 2018

    Well, it's up to you really, and your needs. I just wanted to share this piece of info;) And yes, the animation was just a test to see the motion, so you're absolutely right, more complex scenes would take a lot longer for sure. As I said, I know very little about Iray and GPU rendering; it's not my cup of tea, and I don't have the patience to tie up my computer for days. If you work professionally you might want to find other solutions. It's all about using the right tools to get the job done, right:)

    And I prefer local hardware too, but if I was to do something more complex I would consider using cloud rendering.

    Post edited by Sven Dullah on
  • Thanks for your insight and everything. Well, technically I offload the processing to the GPU, so the GPU is running like crazy in the background, but I turn off the CPU part of it (this dinky little Ryzen isn't going to make a difference), so I can still use the computer. My plan was to set up scenes on this main machine and then send them over to the dedicated rendering rig I'm planning, so even when I render animations, that computer will be running, not the primary one.

     

     

  • Sven Dullah Posts: 7,621
    anonbach said:

    Thanks for your insight and everything. Well, technically I offload the processing to the GPU, so the GPU is running in the background like crazy but I turn off the CPU part of it (this dinky little Ryzen ain't gonna make a difference) - I can still use the computer. My plan was to set up scenes on this main machine and then send it over to the dedicated rendering rig that I'm planning on so even if I render animations, that computer will be running, not the primary one. 

     

    I'm sure you'll find a workflow that suits your needs, looking forward to seeing your work:)

     

  • Kitsumo Posts: 1,216
    anonbach said:

    Also, found this interesting tidbit in an article:

    While I must admit that I don’t see any grain artefacts in the above animation, the scene isn’t complicated and does not contain huge amounts of detail. More complex scenes would need to be evaluated, especially at larger sizes. The “render and forget about it” approach isn’t going to work reliably with Iray – at least not for animations.

    So IRAY is a bad choice for animations because it's not consistent? Thanks. 

    Yeah, I looked over that article you quoted. That guy doesn't seem to have any credentials as a 3d technology analyst, at least no more than you or I do. He says mixing slow and fast hardware can give inconsistent results, but gives no examples or proof. I've used a 1080ti + 770 for months now, and before that I used a 770 + 460. He says a larger, more complex scene would produce more grain. Well then it'll just have to render longer. That's how physically based rendering works. I won't go any farther into his article since he's not here to respond, but that just irked me.

    As far as image size and its effect on render time, I'd say take a typical frame from your animation and render it at different resolutions, starting low and going up to 4K or whatever. You can look at the DS log to see exactly how long the initialization time and render time were for each render. You should be able to multiply those times by your total number of frames to get an idea of how long your movie would take at each size.
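    That extrapolation is easy to script; the per-frame timings below are placeholders to plug your own DS log numbers into:

```python
def movie_time_hours(per_frame_s: float, init_s: float, frames: int) -> float:
    """Extrapolate total animation time from one frame's log timings.

    Assumes the initialization cost is paid once per frame, as when
    rendering frames as an image series.
    """
    return (init_s + per_frame_s) * frames / 3600

# Hypothetical per-frame render times measured at each test resolution
timings = {"720p": 45.0, "1080p": 110.0, "4k": 430.0}
estimates = {res: movie_time_hours(t, init_s=20.0, frames=100)
             for res, t in timings.items()}
```

    Comparing the entries also shows directly how far from linear the resolution scaling is on your scene.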

  • anonbach said:

    Material system? Does that mean I will have to port over the subsurface, displacement, bump, diffuse and other shaders to make it work every time I want to render? 

    Not port over exactly. Some stuff is already defined, but far from everything, and that stuff has been defined by the community IIRC. So that stuff can just be loaded up like it would for Iray. Everything else you need to go through node by node and define the node's material etc. For the average character you can get started by just choosing any character in the list; that will at least get the material settings and save you messing around with setting sclera and toenails and that nonsense.

    But I'm a terrible advocate for the engine; I stopped using it for a reason. There are people who really love it. You should find some of them and discuss it with them.

  • Kitsumo said:


    Yeah, I looked over that article you quoted. That guy doesn't seem to have any credentials as a 3d technology analyst, at least no more than you or I do. He says mixing slow and fast hardware can give inconsistent results, but gives no examples or proof. I've used a 1080ti + 770 for months now, and before that I used a 770 + 460. He says a larger, more complex scene would produce more grain. Well then it'll just have to render longer. That's how physically based rendering works. I won't go any farther into his article since he's not here to respond, but that just irked me.

    As far as the image size and affect on render time, I'd say take a typical frame from your animation and render it at different resolutions starting from low and going up to 4k or whatever. You can look at the DS log to see exactly how long the initialization time and render time was for each render. You should be able to multiply those times by your total number of frames to get an idea of how long your movie would take at each size.

    I'm no more an expert on 3D than most people here, but I have looked into multi-GPU rendering using Iray quite a bit, and the major downside of mixing slow and fast cards isn't card speed at all. It's the amount of memory on the slower cards, particularly older-generation cards. For instance, the GTX 770 only had 2 GB of VRAM, which is likely woefully inadequate for most renders. What happens in a multi-GPU setup is that every card must be able to load every asset required into its own VRAM, or it will not participate in the render. So it is entirely likely that the 770 is only rarely boosting your renders; although tbh the 1080 Ti is so much faster than a 770 that it is highly unlikely the 770 could contribute much, if anything, to accelerating render times even in those low-end renders. You might try running the same render with the 770 excluded and timing it. Considering that time spent trying to load assets into the 770 is time spent not rendering, you might be better off just pulling it entirely.
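    That participation rule can be sketched as a simple filter (the card list and scene size below are made-up numbers, not measurements):

```python
def participating_gpus(scene_gb: float, gpus: dict[str, float]) -> list[str]:
    """Return the cards whose VRAM can hold the entire scene.

    Models the rule above: a card that can't fit every asset in its
    own VRAM sits the render out entirely.
    """
    return [name for name, vram_gb in gpus.items() if vram_gb >= scene_gb]

rig = {"GTX 770": 2.0, "GTX 1080 Ti": 11.0, "RTX 2080 Ti": 11.0}
active = participating_gpus(scene_gb=6.5, gpus=rig)  # the 770 drops out
```

    The point being: a mixed rig renders only as widely as the smallest card's VRAM allows for any given scene.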

  • Kitsumo Posts: 1,216
    Kitsumo said:


    I'm no more an expert on 3d than most people here I have looked into multi GPU rendering using Iray quite a bit and the major downside of mixing slow and fast cards isn't card speed at all. It's the amount of memory on slower cards particularly older generation cards. For instance the GTX 770 only had 2 Gb of VRAM which is likely woefully inadequate for most renders. What happens in a multi GPU setup is every card must be able to load every asset required into its own VRAM or it will not participate in the render. So it is entirely likely that that 770 is only rarely boosting your renders, although tbh the 1080ti is so much faster than a 770 that it is highly unlikely that a 770 could contribute much if anything to accelerating render times even in those low end renders. You might try running the same render with the 770 excluded and timing the render. Considering the time spent trying to load assets into the 770 is time spent not rendering you might be better off just pulling it entirely.

    I put up a few extra bucks to get the 4 GB version of the 770, so it's not too bad. I just try to keep my scenes from getting too big. On the Sickleyield Iray benchmark, the 1080 Ti does about 4300 iterations and the 770 does about 600 out of the roughly 5000 total. As you can see from my benchmarks, the 770 does contribute, but I can see how you would consider pulling it. I'm happy with it, and I won't have a card to replace it anytime soon, especially if they keep running sales like this. Besides, I'm not too concerned about speed, and to be honest I'm not much of an artist, I just enjoy playing with the technology. I think I have more fun with my computer when it's not working correctly.

    As far as memory usage goes, I can't figure out what's going on in Iray. Rendering the Art Deco Hotel Lobby, for example, my system maxes out at 15.4 GB or so, the 1080 Ti at 8.9 GB, and the 770 at 3.9 GB. Previously I would have thought the 770 wouldn't render the scene, but it does. So I've learned not to get too concerned with memory requirements; Iray is using some kind of voodoo to make things work.

  • Sven Dullah Posts: 7,621
    edited October 2018
    Kitsumo said:

    As far as memory usage goes, I can't figure out what's going on in Iray. Rendering the Art Deco Hotel Lobby, for example, my system maxes out to 15.4 Gb or so, the 1080ti at 8.9 Gb and the 770 is at 3.9 Gb. Previously I would have thought the 770 wouldn't render the scene, but it does. So I've learned not to get too concerned with memory requirements, Iray is using some kind of Voodoo to make things work.

    LOL so it seems! There was a discussion about using Iray to render those large environments with a lot of instances. TangoAlpha estimated that a set he made renders in a couple of minutes. So I can't understand why you can render a still in minutes, but when rendering an animation, a single frame suddenly takes several hours to render. Doesn't make much sense.

    Post edited by Sven Dullah on
  • Kitsumo said:

    I put up a few extra bucks to get the 4Gb version of the 770, so it's not too bad. I just try to keep my scenes from getting too big. On the Sickleyield Iray benchmark, the 1080ti does about 4300 iterations and the 770 does about 600 out of the roughly 5000 total. As you can see from my benchmarks, the 770 does contribute, but I can see how you would consider pulling it. I'm happy with it, and I won't have a card to replace it anytime soon, especially if they keep running sales like this.angry Besides, I'm not too concerned about speed, and to be honest I'm not much of an artist, I just enjoy playing with the technology. I think I have more fun with my computer when it's not working corrrectly.

    As far as memory usage goes, I can't figure out what's going on in Iray. Rendering the Art Deco Hotel Lobby, for example, my system maxes out to 15.4 Gb or so, the 1080ti at 8.9 Gb and the 770 is at 3.9 Gb. Previously I would have thought the 770 wouldn't render the scene, but it does. So I've learned not to get too concerned with memory requirements, Iray is using some kind of Voodoo to make things work.

    Based on Nvidia's documentation and my own experiments, that's not supposed to work. Are you using the 1080 Ti for your video output during renders?

  • Kitsumo Posts: 1,216
    edited October 2018
    Kitsumo said:

    Based on Nvidia's documentation and my own experiments that's not supposed to work. Are you using the 1080ti to do your video output during renders?

    Yes, the 1080 Ti is my main display card. My theory on why it works is that both cards load the main props in the camera's view full and uncompressed (or at least at the same compression ratio), but for the off-camera stuff that still needs to be calculated for shadows and reflections, the 770 could be loading highly compressed textures, since they don't need to be that clear just for reflections and shadows. For a reflection that's super blurry, like the back wall of the elevators, the textures loaded for surrounding objects can be super lo-res because you can't make out any details anyway. And for a scene like that, most of the textures loaded are going to be for stuff that's not even in front of the camera. If I had to guess how/why it works, that would be my guess. Maybe one day I'll try it with the GTX 460 and see if that can render it.

    Iray Preview

    Edit: Rendering with just the 770 works fine. It's still maxed out at 3.9 GB and the system RAM is at 15.2, so I think it is texture compression. If I get a chance later I'll try a render with each card and see if the log shows anything different for the textures loaded.

    Iray Preview.jpg
    1920 x 1040 - 397K
    Post edited by Kitsumo on
  • That is definitely not how Iray works. The reason I asked whether the 1080 Ti was your display card: how did you determine how much of the VRAM was in use by Iray and how much was in use for general video output?

    There is no way the GPU can compress or uncompress textures etc. in VRAM. Those reflections (assuming they are reflections) are created by ray tracing the light bouncing off the actual objects onto the reflective surfaces; there are no textures loaded for that. You can prove this yourself by moving the camera: you'll get different reflections. There is no way there could be an infinite set of textures representing all the possible reflections from all the possible camera positions that could see those reflective surfaces.

    If the 770 rendered the scene using 3.9 GB and it wasn't driving your video output, then that was the scene's total VRAM usage (assuming you also disabled CPU rendering), which isn't terribly surprising; based on the image of the scene you provided, it looks like a fairly bare elevator lobby with no characters in it. I have no idea what was using 15.2 GB of system RAM; that would depend on what else you had running, or whether you had CPU rendering turned on.

  • Kitsumo Posts: 1,216
    edited November 2018

    I think you misunderstood what I was saying. Both cards are calculating reflections in real time; that's how PBR works. To do that, they both have to have geometry and textures loaded for everything in the scene in order to calculate reflections. If that's the case, there's no reason the 770 can't load a lower-resolution texture to save memory. Look at the back of the elevator reflecting the check-in desk; there's no way you can tell if that's a high-resolution texture on that desk. Iray definitely does offer texture compression; why else would there be a setting for it?

    iray preview

    Anyway, I connected the monitor to the 770 and it renders fine. I can deselect the 1080 Ti, or I could take it out completely, and I'm sure the 770 would still render. I'm not saying I know exactly how it works, but it does work, and I'm just trying to figure out how/why. I don't want to sound unfriendly or anything, but it seemed like you didn't get what I was saying.

    iray preview

    Then I decided to increase the texture compression values to something ridiculous. As far as I know, textures will be compressed only if they're above the threshold. Anyway, the results are open to interpretation, but from what I can see the 770 loads all the textures it can but never actually starts rendering. The 1080 Ti loads 10 GB, whereas before it only used 8 or 9. System RAM is still pegged (DS is using 13.9 and the rest is mostly Firefox). In both images, the 770 is the main display card.
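    If I understand the threshold settings right, they behave roughly like this (a sketch; the pixel cutoffs here are illustrative, not Iray's actual defaults):

```python
def compression_level(width: int, height: int,
                      medium_px: int = 512, high_px: int = 1024) -> str:
    """Pick a compression level from a texture's size, threshold-style.

    Textures smaller than the medium threshold are left uncompressed;
    the thresholds are illustrative stand-ins for the render settings.
    """
    longest = max(width, height)
    if longest < medium_px:
        return "none"
    if longest < high_px:
        return "medium"
    return "high"
```

    Cranking the thresholds up to something ridiculous would then push everything into the "none" bucket, which is what I was trying to test.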

    Iray Preview2.jpg
    1917 x 1041 - 546K
    Iray Preview3.jpg
    1917 x 1041 - 290K
    Post edited by Kitsumo on
  • All settings for a render are universal for the render; they are not card-specific. One card cannot do one thing while another does something different. Every card in the render has to work from the exact same resources, or the results in the rendered image won't match.

    Your claims are directly contrary to many other people's results, including mine, and to Nvidia's own documentation. I think what happened is the 770 failed to render and the CPU rendered the scene instead. You just didn't notice because the 1080 Ti crushed it so fast that you hardly noticed the slowdown. That's why DS was taking up so much system RAM; DS should never consume that much RAM.

  • Kitsumo Posts: 1,216

    Everything you say makes sense. I would certainly think that every card needs to have the exact same set of data. But something is happening, unless Afterburner is lying about which cards are doing what. Hopefully I'll fool around with it some more this weekend; heck, I might install the 460 1 GB card to see if that works with anything. I think Iray certainly has memory limits, but I don't think they're as strict as everyone seems to think. If I can come up with a reasonable way to test it, I'll start another thread; we've pulled this one far enough off topic. Sure, I could be wrong. I've certainly been wrong in the past, could be wrong now, and will definitely be wrong in the future. Anyone who's afraid of being wrong will never accomplish anything worthwhile.

    Actually, I can't think of any way to really test memory usage. I'd need a model that's only about 1 GB or so including geometry and textures, but I don't know if there's any way to know for sure whether Iray is compressing textures or not. Well, that's it for my lunch break. Talk to ya later.

  • I think the telling part is the memory usage. DS has a well-known memory leak during renders. Had you run several renders in a row without closing DS before you ran this test? That would perfectly explain the results you're seeing.

  • Whoa guys, I didn't get a chance to come back and see how it's going. Interesting.
