New RTX-2070 Super, little to no render time improvement over CPU. Why?

Comments

  • jukingeo Posts: 711
    lilweep said:

    I have a question about optimising GPU memory:  If you have things in your scene hidden, do they still contribute to the GPU limit or no?  (I assume no?)

    Items with Visible or Visible in Render off are not passed to the render (items that are made invisible through surface properties, however, are).

    lilweep said:

    Also another question: Do you need high subD to take advantage of some fur shaders that use displacement?

    For Iray, yes as displacement needs vertices.

    Ohhhhhhhhhh!  So wait, wait!  Clicking on the eyeball icon in the Scene tab isn't going to cut it?  You have to go down to the settings for render and shut it off that way?

    I do like to use higher sub-d for single characters, as I often use skin details such as veins, blemishes, and freckles, and to show muscular detail.  So that is why I set the characters that are up front higher.
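    As a rough illustration of why sub-d level matters so much for memory (a sketch of my own; the base quad count below is a placeholder, not the exact Genesis 8 figure): Catmull-Clark subdivision roughly quadruples the face count at each level, so the geometry sent to the card grows very quickly.

    # Rough sketch: face count growth per SubD level (Catmull-Clark quadruples faces each level).
    # BASE_QUADS is a placeholder value, not the real Genesis 8 base-mesh count.
    BASE_QUADS = 16_000

    for level in range(5):
        quads = BASE_QUADS * 4 ** level
        print(f"SubD {level}: ~{quads:,} quads")  # geometry memory scales roughly with this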

  • jukingeo said:
    lilweep said:

    I have a question about optimising GPU memory:  If you have things in your scene hidden, do they still contribute to the GPU limit or no?  (I assume no?)

    Items with Visible or Visible in Render off are not passed to the render (items that are made invisible through surface properties, however, are).

    lilweep said:

    Also another question: Do you need high subD to take advantage of some fur shaders that use displacement?

    For Iray, yes as displacement needs vertices.

    Ohhhhhhhhhh!  So wait, wait!  Clicking on the eyeball icon in the Scene tab isn't going to cut it?  You have to go down to the settings for render and shut it off that way?

    The eyeball is the same as the Visible switch (the Parameters pane lets you hide multiple items at once, though).

    jukingeo said:

    I do like to use higher sub-d for single characters, as I often use skin details such as veins, blemishes, and freckles, and to show muscular detail.  So that is why I set the characters that are up front higher.

     

  • IsaacNewton Posts: 1,300
    jukingeo said:
    Falco said:

     

    That totally sucks!  I was under the impression that the purpose of linking cards is to add the memory.   So then what is the purpose of linking cards?   Now, on the flip side: if you have a dual-processor motherboard and two i9 processors, would that speed up render times over one CPU?

    As far as I know, you should not link GPUs for use with DS. DAZ Studio does not support SLI (unless that has changed recently).

    I use two 1080 Ti (11 GB) GPU cards on my machine. The scene limit is still 11 GB, as DS fits the whole scene into each GPU. The advantage of two GPUs is that you have twice as many CUDA cores working, so the render is twice as fast (at least in principle).

  • IsaacNewton Posts: 1,300

    To DAZ3d: how about building a version of Scene Optimizer (https://www.daz3d.com/scene-optimizer) into DS?

  • jukingeo Posts: 711
    jukingeo said:
    lilweep said:

    I have a question about optimising GPU memory:  If you have things in your scene hidden, do they still contribute to the GPU limit or no?  (I assume no?)

    Items with Visible or Visible in Render off are not passed to the render (items that are made invisible through surface properties, however, are).

    lilweep said:

    Also another question: Do you need high subD to take advantage of some fur shaders that use displacement?

    For Iray, yes as displacement needs vertices.

    Ohhhhhhhhhh!  So wait, wait!  Clicking on the eyeball icon in the Scene tab isn't going to cut it?  You have to go down to the settings for render and shut it off that way?

    The eyeball is the same as the Visible switch (the Parameters pane lets you hide multiple items at once, though).

    jukingeo said:

    I do like to use higher sub-d for single characters, as I often use skin details such as veins, blemishes, and freckles, and to show muscular detail.  So that is why I set the characters that are up front higher.

     

    Ok so the eyeball icon is shutting the item off from rendering, right?  I just wanted to make sure I was turning off what isn't visible correctly.

     

    jukingeo said:
    Falco said:

     

    That totally sucks!  I was under the impression that the purpose of linking cards is to add the memory.   So then what is the purpose of linking cards?   Now, on the flip side: if you have a dual-processor motherboard and two i9 processors, would that speed up render times over one CPU?

    As far as I know, you should not link GPUs for use with DS. DAZ Studio does not support SLI (unless that has changed recently).

    I use two 1080 Ti (11 GB) GPU cards on my machine. The scene limit is still 11 GB, as DS fits the whole scene into each GPU. The advantage of two GPUs is that you have twice as many CUDA cores working, so the render is twice as fast (at least in principle).

    So the only benefit to multiple cards is a speed boost?

    thanks Geo

  • SpottedKitty Posts: 7,232
    jukingeo said:
    I have been thinking about getting a 1080 Ti secondhand, as that card is a bit better than the 2070 Super.  The extra 3 GB of RAM should make a difference. It also has more CUDA cores.

    You can't really compare between NVidia card "generations" as easily as that. Different VRAM types run at different speeds, CUDA cores have been getting faster and more efficient with each new model, and the newest 20-series cards also have the advantage of RTX, which gives an extra render speed boost all on its own.

  • jukingeo said:
    Falco said:

     

    That totally sucks!  I was under the impression that the purpose of linking cards is to add the memory.   So then what is the purpose of linking cards?   Now, on the flip side: if you have a dual-processor motherboard and two i9 processors, would that speed up render times over one CPU?

    As far as I know, you should not link GPUs for use with DS. DAZ Studio does not support SLI (unless that has changed recently).

    I use two 1080 Ti (11 GB) GPU cards on my machine. The scene limit is still 11 GB, as DS fits the whole scene into each GPU. The advantage of two GPUs is that you have twice as many CUDA cores working, so the render is twice as fast (at least in principle).

    It's nVidia's Iray that does not support SLI, rather than DS.

  • jukingeo said:
    jukingeo said:
    lilweep said:

    I have a question about optimising GPU memory:  If you have things in your scene hidden, do they still contribute to the GPU limit or no?  (I assume no?)

    Items with Visible or Visible in Render off are not passed to the render (items that are made invisible through surface properties, however, are).

    lilweep said:

    Also another question: Do you need high subD to take advantage of some fur shaders that use displacement?

    For Iray, yes as displacement needs vertices.

    Ohhhhhhhhhh!  So wait, wait!  Clicking on the eyeball icon in the Scene tab isn't going to cut it?  You have to go down to the settings for render and shut it off that way?

    The eyeball is the same as the Visible switch (the Parameters pane lets you hide multiple items at once, though).

    jukingeo said:

    I do like to use higher sub-d for single characters, as I often use skin details such as veins, blemishes, and freckles, and to show muscular detail.  So that is why I set the characters that are up front higher.

     

    Ok so the eyeball icon is shutting the item off from rendering, right?  I just wanted to make sure I was turning off what isn't visible correctly.

    Yes, and I was forgetting that you can now use modifier keys to hide/show multiple items too (which was stupid since I'd used the feature just last night): https://www.daz3d.com/forums/discussion/comment/4926836/#Comment_4926836 (ctrl(Win)/cmd(Mac) click an eye to hide/show that node and its children, if any).

    jukingeo said:

     

    jukingeo said:
    Falco said:

     

    That totally sucks!  I was under the impression that the purpose of linking cards is to add the memory.   So then what is the purpose of linking cards?   Now, on the flip side: if you have a dual-processor motherboard and two i9 processors, would that speed up render times over one CPU?

    As far as I know, you should not link GPUs for use with DS. DAZ Studio does not support SLI (unless that has changed recently).

    I use two 1080 Ti (11 GB) GPU cards on my machine. The scene limit is still 11 GB, as DS fits the whole scene into each GPU. The advantage of two GPUs is that you have twice as many CUDA cores working, so the render is twice as fast (at least in principle).

    So the only benefit to multiple cards is a speed boost?

    thanks Geo

     

  • CUDA core counts cannot be directly compared across microarchitecture generations. The 1080 Ti and 2070 Super have roughly equivalent render performance.

    Also, a render doesn't fall back to CPU if the scene is too big for a smaller-VRAM card; that card will simply be dropped from the render. I have a 2070 and a 1080 Ti. I try to keep scenes under 8 GB so both work, but when a scene goes over that, the 1080 Ti still renders.
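    A minimal sketch of that device-selection behaviour (my own illustration, not Iray's actual logic; the numbers are made up): each card either holds the whole scene or sits the render out, and CPU fallback only happens when no card qualifies.

    # Illustration only: keep each GPU whose VRAM can hold the whole scene,
    # and fall back to CPU only when none can.
    def pick_devices(scene_gb, gpus):
        usable = [name for name, vram_gb in gpus if vram_gb >= scene_gb]
        return usable if usable else ["CPU"]

    cards = [("RTX 2070", 8), ("GTX 1080 Ti", 11)]
    print(pick_devices(7.5, cards))   # ['RTX 2070', 'GTX 1080 Ti'] - both render
    print(pick_devices(9.5, cards))   # ['GTX 1080 Ti'] - the 2070 is dropped
    print(pick_devices(12.0, cards))  # ['CPU'] - too big for either card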

  • jukingeo Posts: 711

    CUDA core counts cannot be directly compared across microarchitecture generations. The 1080 Ti and 2070 Super have roughly equivalent render performance.

    Also, a render doesn't fall back to CPU if the scene is too big for a smaller-VRAM card; that card will simply be dropped from the render. I have a 2070 and a 1080 Ti. I try to keep scenes under 8 GB so both work, but when a scene goes over that, the 1080 Ti still renders.

    Well, if they are about the same speed-wise then that is fine; still, I would think having the extra 3 GB of RAM is a help.  It would seem that if you had multiple cards, the render capacity goes the other way and favors the larger-capacity card.   So all in all a Titan card would win out here, but at great cost.  That card costs as much as my whole machine.

     

    Well, pricing out a full computer right now is not in the near future, so for now I am just going to get a 1080 Ti secondhand on eBay.  Hopefully I will have better luck with that card.  Being older, it is supported on Linux, which I need since I have a dual-boot machine and normally work in Linux for everyday tasks.  I mainly only use Windows for graphic arts.

  • jukingeo said:

    CUDA core counts cannot be directly compared across microarchitecture generations. The 1080 Ti and 2070 Super have roughly equivalent render performance.

    Also, a render doesn't fall back to CPU if the scene is too big for a smaller-VRAM card; that card will simply be dropped from the render. I have a 2070 and a 1080 Ti. I try to keep scenes under 8 GB so both work, but when a scene goes over that, the 1080 Ti still renders.

    Well, if they are about the same speed-wise then that is fine; still, I would think having the extra 3 GB of RAM is a help.  It would seem that if you had multiple cards, the render capacity goes the other way and favors the larger-capacity card.   So all in all a Titan card would win out here, but at great cost.  That card costs as much as my whole machine.

    I'm not sure what you are saying here - the ability of the card affects quality only if, without it, the render stops due to hitting the sample limit before the convergence limit. If two renders stop at the same convergence limit they will be the same, regardless of the hardware used to create them (barring the use of the denoiser, which will run only on the GPU).

    jukingeo said:

     

    Well, pricing out a full computer right now is not in the near future, so for now I am just going to get a 1080 Ti secondhand on eBay.  Hopefully I will have better luck with that card.  Being older, it is supported on Linux, which I need since I have a dual-boot machine and normally work in Linux for everyday tasks.  I mainly only use Windows for graphic arts.

     

  • jukingeo Posts: 711
    jukingeo said:

    CUDA core counts cannot be directly compared across microarchitecture generations. The 1080 Ti and 2070 Super have roughly equivalent render performance.

    Also, a render doesn't fall back to CPU if the scene is too big for a smaller-VRAM card; that card will simply be dropped from the render. I have a 2070 and a 1080 Ti. I try to keep scenes under 8 GB so both work, but when a scene goes over that, the 1080 Ti still renders.

    Well, if they are about the same speed-wise then that is fine; still, I would think having the extra 3 GB of RAM is a help.  It would seem that if you had multiple cards, the render capacity goes the other way and favors the larger-capacity card.   So all in all a Titan card would win out here, but at great cost.  That card costs as much as my whole machine.

    I'm not sure what you are saying here - the ability of the card affects quality only if, without it, the render stops due to hitting the sample limit before the convergence limit. If two renders stop at the same convergence limit they will be the same, regardless of the hardware used to create them (barring the use of the denoiser, which will run only on the GPU).

    jukingeo said:

     

    Well, pricing out a full computer right now is not in the near future, so for now I am just going to get a 1080 Ti secondhand on eBay.  Hopefully I will have better luck with that card.  Being older, it is supported on Linux, which I need since I have a dual-boot machine and normally work in Linux for everyday tasks.  I mainly only use Windows for graphic arts.

     

    The 1080 Ti is 11 GB as opposed to 8 GB on the 2070. So it holds a larger scene to render, correct?

  • jukingeo said:
    jukingeo said:

    CUDA core counts cannot be directly compared across microarchitecture generations. The 1080 Ti and 2070 Super have roughly equivalent render performance.

    Also, a render doesn't fall back to CPU if the scene is too big for a smaller-VRAM card; that card will simply be dropped from the render. I have a 2070 and a 1080 Ti. I try to keep scenes under 8 GB so both work, but when a scene goes over that, the 1080 Ti still renders.

    Well, if they are about the same speed-wise then that is fine; still, I would think having the extra 3 GB of RAM is a help.  It would seem that if you had multiple cards, the render capacity goes the other way and favors the larger-capacity card.   So all in all a Titan card would win out here, but at great cost.  That card costs as much as my whole machine.

    I'm not sure what you are saying here - the ability of the card affects quality only if, without it, the render stops due to hitting the sample limit before the convergence limit. If two renders stop at the same convergence limit they will be the same, regardless of the hardware used to create them (barring the use of the denoiser, which will run only on the GPU).

    jukingeo said:

     

    Well, pricing out a full computer right now is not in the near future, so for now I am just going to get a 1080 Ti secondhand on eBay.  Hopefully I will have better luck with that card.  Being older, it is supported on Linux, which I need since I have a dual-boot machine and normally work in Linux for everyday tasks.  I mainly only use Windows for graphic arts.

     

    The 1080 Ti is 11 GB as opposed to 8 GB on the 2070. So it holds a larger scene to render, correct?

    Yes, which means that a render that drops to CPU on the 2070 may not on the 1080 Ti, so one may "complete" (reach the target convergence ratio) sooner than the other, but unless it exceeds the maximum time settings the other one will get there in the end too. The reverse is true if a render fits in memory on both cards. That will make a difference to quality only if the time (or iterations) limit is reached before the convergence limit - it isn't that one card gives inherently superior renders.
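    To put that stopping logic in concrete terms, here is a simplified sketch (my own illustration, not Iray's code; the default limits are just example values): the render ends at whichever limit it hits first, and only renders stopped by the time or iteration cap differ in quality.

    # Simplified illustration of progressive-render stop conditions.
    def stop_reason(convergence, iterations, seconds,
                    target=0.95, max_iterations=5000, max_seconds=7200):
        if convergence >= target:
            return "converged - same quality on any hardware"
        if iterations >= max_iterations:
            return "iteration cap hit before converging"
        if seconds >= max_seconds:
            return "time cap hit before converging"
        return "still rendering"

    print(stop_reason(convergence=0.95, iterations=1800, seconds=1620))  # fast GPU
    print(stop_reason(convergence=0.71, iterations=5000, seconds=5400))  # slower device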

  • jukingeo Posts: 711
    jukingeo said:
    The 1080 Ti is 11 GB as opposed to 8 GB on the 2070. So it holds a larger scene to render, correct?

    Yes, which means that a render that drops to CPU on the 2070 may not on the 1080 Ti, so one may "complete" (reach the target convergence ratio) sooner than the other, but unless it exceeds the maximum time settings the other one will get there in the end too. The reverse is true if a render fits in memory on both cards. That will make a difference to quality only if the time (or iterations) limit is reached before the convergence limit - it isn't that one card gives inherently superior renders.

    Well, I have the 1080 Ti coming now, and being older it is supported in my version of Linux as well, which was another issue I had with the 2070.  I went back to look at my files and many of them were pretty close to the 8 GB mark, so I am hoping they will fit in 11 GB.  Still, I am going to take the optimization advice into consideration, as that would be a big help, GPU or CPU.  While I do still want a dedicated computer, the thing is that having just moved into a new home, I have been spending too much money getting things in order here.  So I had to stick with a cheaper alternative, and the 1080 Ti seems to be it.

     

     

  • jukingeo Posts: 711
    edited January 2020

    UPDATE:

    Hello All!

    I received my EVGA GTX 1080 Ti Black and installed it.  I went back to try out the renders above as a test.  I managed to get the outdoor scene to work on the GPU, as I had noticed one of the characters (the guy) was set to a sub-d of 2.  Once I set him to 1, the render started to process under GPU power.   It seems pretty fast!  Wow!  This was an over-2-hour render with the CPU only, and it went through the 1080 Ti in 27 minutes exactly. It hit 90% convergence at 16 minutes 20 seconds.  Not bad for less than half an hour.   It was much quieter too, since the CPU fan doesn't ramp up during a GPU render.  I learned with the previous card that if I hear the CPU fan ramp up, the GPU has dropped out and the CPU has taken over.

    The cute furry little lady attached below took me just a hair over an hour with CPU power at subdivision 2.  It took only 11 minutes with the 1080ti at subdivision 3 and sub-d 5 (the highest the figure would go):

     

    Now going back to the indoor render above with the three characters, sadly that one dropped out and went to CPU power :(.  I had it set for 1800 pixels across, so I am going to drop that down to 1700.  The character up front is at subdivision 2 and the two in the back are at subdivision 1.  So I am going to see how that works now...

    OK, it dropped out again.  Looking at the log file, I have noticed something odd.  The memory consumption is nowhere near the 11 GB of the card (see highlights in red).  Highlights in blue show where the switchover took place.

    2020-01-12 14:43:37.410 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Allocating 1-layer frame buffer
    2020-01-12 14:43:37.426 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Using batch scheduling, caustic sampler disabled
    2020-01-12 14:43:37.426 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Initializing local rendering.
    2020-01-12 14:43:37.441 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering with 1 device(s):
    2020-01-12 14:43:37.441 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info :     CUDA device 0 (GeForce GTX 1080 Ti)
    2020-01-12 14:43:37.441 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering...
    2020-01-12 14:43:37.441 Iray [VERBOSE] - IRAY:RENDER ::   1.2   IRAY   rend progr: CUDA device 0 (GeForce GTX 1080 Ti): Processing scene...
    2020-01-12 14:43:37.738 Iray [INFO] - IRAY:RENDER ::   1.3   IRAY   rend info : Initializing OptiX Prime for CUDA device 0
    2020-01-12 14:43:37.738 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Geometry memory consumption: 1.28015 GiB (device 0), 0 B (host)
    2020-01-12 14:46:54.875 Iray [INFO] - IRAY:RENDER ::   1.3   IRAY   rend info : Importing lights for motion time 0
    2020-01-12 14:46:54.922 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Texture memory consumption: 4.28777 GiB for 394 bitmaps (device 0)
    2020-01-12 14:46:55.920 Iray [INFO] - IRAY:RENDER ::   1.3   IRAY   rend info : Initializing light hierarchy.
    2020-01-12 14:47:01.318 Iray [INFO] - IRAY:RENDER ::   1.3   IRAY   rend info : Light hierarchy initialization took 5.395s
    2020-01-12 14:47:01.349 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Lights memory consumption: 33.8823 MiB (device 0)
    2020-01-12 14:47:01.364 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Material measurement memory consumption: 0 B (GPU)
    2020-01-12 14:47:02.066 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Materials memory consumption: 1.2794 MiB (GPU)
    2020-01-12 14:47:02.066 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : PTX code (450 KiB) for SM 6.1 generated in 0.699s
    2020-01-12 14:47:08.915 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [ERROR] - IRAY:RENDER ::   1.3   IRAY   rend error: Unable to allocate 72000000 bytes from 30392320 bytes of available device memory
    2020-01-12 14:47:08.930 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [ERROR] - IRAY:RENDER ::   1.2   IRAY   rend error: CUDA device 0 (GeForce GTX 1080 Ti): Scene setup failed
    2020-01-12 14:47:08.930 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [ERROR] - IRAY:RENDER ::   1.2   IRAY   rend error: CUDA device 0 (GeForce GTX 1080 Ti): Device failed while rendering
    2020-01-12 14:47:08.930 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [WARNING] - IRAY:RENDER ::   1.2   IRAY   rend warn : All available GPUs failed.
    2020-01-12 14:47:08.930 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [WARNING] - IRAY:RENDER ::   1.2   IRAY   rend warn : No devices activated. Enabling CPU fallback.
    2020-01-12 14:47:08.930 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [ERROR] - IRAY:RENDER ::   1.2   IRAY   rend error: All workers failed: aborting render
    2020-01-12 14:47:08.946 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: using 8 cores for rendering
    2020-01-12 14:47:08.946 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering with 1 device(s):
    2020-01-12 14:47:08.946 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info :     CPU
    2020-01-12 14:47:08.946 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering...
    2020-01-12 14:47:08.946 Iray [VERBOSE] - IRAY:RENDER ::   1.2   IRAY   rend progr: CPU: Processing scene...
    2020-01-12 14:47:08.946 Iray [INFO] - IRAY:RENDER ::   1.4   IRAY   rend info : Initializing Embree
    2020-01-12 14:47:53.734 Iray [VERBOSE] - IRAY:RENDER ::   1.4   IRAY   rend stat : Native CPU code generated in 0.739s

     

    So my question now is, why did the GPU kick out early when nowhere near the full memory was used?  I understand that maybe a gig or two is used by the card for operations, but I don't think the card should drop out with an 8 GB (or so) scene.  Would it?  I don't know.

    Any info would be appreciated.

    Thank you,

    Geo

    Tigress-1080ti11mins.png
    1391 x 1800 - 5M
    Post edited by Chohole on
  • Robinson Posts: 751
    jukingeo said:

    It seems GPU memory is the big problem, and if you want details then one is forking over $2,500 for a Titan.  I have been thinking about getting a 1080 Ti secondhand, as that card is a bit better than the 2070 Super.  The extra 3 GB of RAM should make a difference. It also has more CUDA cores.

    Is a 1080 Ti better?  Yes, it has more RAM, but it has no RTX ASICs. I can't believe it's better at RT than the 2070 Super.

  • Robinson said:
    jukingeo said:

    It seems GPU memory is the big problem, and if you want details then one is forking over $2,500 for a Titan.  I have been thinking about getting a 1080 Ti secondhand, as that card is a bit better than the 2070 Super.  The extra 3 GB of RAM should make a difference. It also has more CUDA cores.

    Is a 1080 Ti better?  Yes, it has more RAM, but it has no RTX ASICs. I can't believe it's better at RT than the 2070 Super.

    Besides having less VRAM, the 2070 Super roughly matches the 1080 Ti in performance before RT is taken into account; there are tons of benchmarks in games showing this. For Iray rendering you also get the RT hardware, which can be a massive speed increase.

    At current prices the 2070 Super is a far better deal than a used 1080ti.

  • jukingeo Posts: 711
    edited January 2020
    Robinson said:
    jukingeo said:

    It seems GPU memory is the big problem, and if you want details then one is forking over $2,500 for a Titan.  I have been thinking about getting a 1080 Ti secondhand, as that card is a bit better than the 2070 Super.  The extra 3 GB of RAM should make a difference. It also has more CUDA cores.

    Is a 1080 Ti better?  Yes, it has more RAM, but it has no RTX ASICs. I can't believe it's better at RT than the 2070 Super.

    Besides having less VRAM, the 2070 Super roughly matches the 1080 Ti in performance before RT is taken into account; there are tons of benchmarks in games showing this. For Iray rendering you also get the RT hardware, which can be a massive speed increase.

    At current prices the 2070 Super is a far better deal than a used 1080ti.

    Overall I don't notice much of a difference except on my pocketbook. I saved $150 on the 1080 Ti, and the extra RAM does seem to give me a bit more room.  However, as to my question above, I am curious as to why the GPU dropped out on that render when it seems like there was enough memory.

    This basic render only took 5 minutes:

    Looks pretty good to me.

    Alley-Centuar-SexyPose1.png
    1800 x 1800 - 6M
    Post edited by Chohole on
  • Robinson Posts: 751

    Impossible to say without the scene, and access to all of the elements within it.

  • I already tried to help you. You ignored the help from me and many others.

  • Toonces Posts: 919

    Geometry memory consumption: 1.28015 GiB (device 0), 0 B (host)
    2020-01-12 14:46:54.875 Iray [INFO] - IRAY:RENDER ::   1.3   IRAY   rend info : Importing lights for motion time 0
    2020-01-12 14:46:54.922 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Texture memory consumption: 4.28777 GiB for 394 bitmaps (device 0)

    Those two numbers in red seem like the culprit if it's dropping to CPU with a 1080 Ti. Is it less than 11 GB? Certainly, but other factors are involved. For example, in Render Settings > Advanced, you could have Texture Compression set too high, which can *quickly* eat GPU memory.

    As others have suggested, Scene Optimizer cuts textures in half (which usually has no effect on render quality unless doing an extreme close-up). It's an inexpensive product to get the most out of your GPU (by ensuring it doesn't drop to CPU).

    I also use the Iray Memory Assistant product, since it lets me spot scene elements with heavy textures or too-high subdivision. It isn't a great product for estimating how much memory you'll use on the GPU (nothing accomplishes that, unfortunately), but it gives you some easy targets for making your scene fit.

    Just remember you have to restart Daz between each attempt (since dropping to CPU once causes Daz to always drop to CPU until restart).
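    To put some rough numbers on the texture side (a back-of-the-envelope sketch only; the map list, sizes, and ~4 bytes per pixel are assumptions, and Iray's compression and mip-mapping will change the real figures): an uncompressed map costs roughly width x height x bytes-per-pixel, so a handful of 4K maps adds up fast, and halving their resolution cuts that to about a quarter.

    # Back-of-the-envelope VRAM estimate for a few texture maps (hypothetical scene).
    BYTES_PER_PIXEL = 4  # rough assumption for an uncompressed RGBA map

    maps = {
        "skin_diffuse": (4096, 4096),
        "skin_normal":  (4096, 4096),
        "outfit":       (4096, 4096),
        "environment":  (8192, 4096),
    }

    def total_mib(texture_maps, scale=1.0):
        pixels = sum(int(w * scale) * int(h * scale) for w, h in texture_maps.values())
        return pixels * BYTES_PER_PIXEL / 2**20

    print(f"full size: ~{total_mib(maps):.0f} MiB")
    print(f"half size: ~{total_mib(maps, scale=0.5):.0f} MiB")  # roughly a quarter of the memory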

  • jukingeo Posts: 711
    edited January 2020

    I already tried to help you. You ignored the help from me and many others.

    What you and most others (but not all) were trying to do was talk me into keeping the 2070, which I was not satisfied with at its current price point.   For me the 1080 Ti offered a better deal.   I don't care if the 2070 looks better on paper; for the cost of $540, it wasn't cutting it for me, so that is why I sent it back.  It could be that I got a bad 2070, as I have been reading lately of issues with the RTX line, which is another good reason to have returned it.   I am good with the 1080 Ti.   Still not ideal, but the extra memory and lower purchase price suited me more.   While I appreciate any help given, I didn't want to be continually reminded of how the 2070 performs on paper.  In a nutshell, I wasn't happy with the price-to-performance ratio and I sent it back.  Done deal.

    Post edited by jukingeo on
  • jukingeo Posts: 711
    Toonces said:

    Geometry memory consumption: 1.28015 GiB (device 0), 0 B (host)
    2020-01-12 14:46:54.875 Iray [INFO] - IRAY:RENDER ::   1.3   IRAY   rend info : Importing lights for motion time 0
    2020-01-12 14:46:54.922 Iray [VERBOSE] - IRAY:RENDER ::   1.3   IRAY   rend stat : Texture memory consumption: 4.28777 GiB for 394 bitmaps (device 0)

    Those two numbers in red seem like the culprit if it's dropping to CPU with a 1080 Ti. Is it less than 11 GB? Certainly, but other factors are involved. For example, in Render Settings > Advanced, you could have Texture Compression set too high, which can *quickly* eat GPU memory.

    As others have suggested, Scene Optimizer cuts textures in half (which usually has no effect on render quality unless doing an extreme close-up). It's an inexpensive product to get the most out of your GPU (by ensuring it doesn't drop to CPU).

    I also use the Iray Memory Assistant product, since it lets me spot scene elements with heavy textures or too-high subdivision. It isn't a great product for estimating how much memory you'll use on the GPU (nothing accomplishes that, unfortunately), but it gives you some easy targets for making your scene fit.

    Just remember you have to restart Daz between each attempt (since dropping to CPU once causes Daz to always drop to CPU until restart).

    So which do you find is a better help: Scene Optimizer or Iray Memory Assistant?

    Yes, I have noticed that on a failed run I have to use the Task Manager to stop Daz and restart it; otherwise the memory doesn't clear out and it goes straight to CPU.

    Thanks

  • Toonces Posts: 919

    Scene Optimizer. And you really just need to cut the size of the texture maps in half. It has the biggest impact and is easy to do without learning all the other Scene Optimizer options.
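    For anyone who wants to try the same idea by hand before buying anything, a minimal sketch using Pillow is below (my own example, not part of Scene Optimizer; the folder path is a placeholder and the script overwrites the files it touches, so point it at copies of your texture maps).

    # Halve the resolution of every PNG texture in a folder (run on copies of your maps!).
    # Requires Pillow: pip install Pillow
    from pathlib import Path
    from PIL import Image

    TEXTURE_DIR = Path("textures_copy")  # placeholder path

    for path in TEXTURE_DIR.glob("*.png"):
        with Image.open(path) as img:
            half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
        half.save(path)  # overwrites the copy with the half-size map
        print(f"{path.name}: now {half.width}x{half.height}")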

  • JD_Mortal Posts: 760
    edited January 2020

    I don't know if that scene uses them, but if it has "Instances", you can get a little bit of memory savings by selecting "Optimize for memory" as opposed to "Optimize for speed".

    Your desires, like mine, suggest that you NEED a 12 GB to 24 GB VRAM card, unless you want to learn some external processing tricks to speed up the renders or create more elaborate scenes.

    There was actually a video I saw that was Blender-related, but not specific to Blender. It showed how something like a 2,000-polygon scene was used to render the subway fight scene from "The Matrix". Another video showed how similar tricks were used to create a whole cityscape from a similar number of polygons, where all the buildings seemed unique.

    One flaw with our thinking, with the advent of programs like Daz3D and pre-made objects, is to try to use them all to make complex scenes without any realistic consideration of what the scene or view actually needs, or of our cards' abilities.

    For example, just by looking at the render and being familiar with the scene you used (the Aztec temple), I can tell that 75% of the material images are not even being used. That was not an Iray setup; it was a 3Delight setup. Many of the materials are still going to be loaded, but after conversion their settings do nothing, or they are set up so generically that they have no noticeable variance in the output.

    Sadly, I could previously load up 6 Gen 3 models, fully clothed, in a scene packed with 30+ items, and render it fine. However, now, with whatever changed between 4.10 and 4.12... I am lucky if the scene is done loading after 15 minutes, for a 2-minute render, which can only be done once before I have to restart Daz so it can render a second time. I cringe to imagine trying to load up a single Gen 8 model with clothing. (It takes about the same amount of time, even though there is honestly nothing special about Gen 8 over Gen 3, oddly.) This is with my 2x 12 GB Titan Xp and 2x 12 GB Titan V cards. (None have RTX, but two have the same Volta/Tensor core hardware as the cards that have RTX.)

    Post edited by JD_Mortal on
  • jukingeo Posts: 711
    JD_Mortal said:

    I don't know if that scene uses them, but if it has "Instances", you can get a little bit of memory savings by selecting "Optimize for memory" as opposed to "Optimize for speed".

    Your desires, like mine, suggest that you NEED a 12 GB to 24 GB VRAM card, unless you want to learn some external processing tricks to speed up the renders or create more elaborate scenes.

    There was actually a video I saw that was Blender-related, but not specific to Blender. It showed how something like a 2,000-polygon scene was used to render the subway fight scene from "The Matrix". Another video showed how similar tricks were used to create a whole cityscape from a similar number of polygons, where all the buildings seemed unique.

    One flaw with our thinking, with the advent of programs like Daz3D and pre-made objects, is to try to use them all to make complex scenes without any realistic consideration of what the scene or view actually needs, or of our cards' abilities.

    For example, just by looking at the render and being familiar with the scene you used (the Aztec temple), I can tell that 75% of the material images are not even being used. That was not an Iray setup; it was a 3Delight setup. Many of the materials are still going to be loaded, but after conversion their settings do nothing, or they are set up so generically that they have no noticeable variance in the output.

    Sadly, I could previously load up 6 Gen 3 models, fully clothed, in a scene packed with 30+ items, and render it fine. However, now, with whatever changed between 4.10 and 4.12... I am lucky if the scene is done loading after 15 minutes, for a 2-minute render, which can only be done once before I have to restart Daz so it can render a second time. I cringe to imagine trying to load up a single Gen 8 model with clothing. (It takes about the same amount of time, even though there is honestly nothing special about Gen 8 over Gen 3, oddly.) This is with my 2x 12 GB Titan Xp and 2x 12 GB Titan V cards. (None have RTX, but two have the same Volta/Tensor core hardware as the cards that have RTX.)

    The scene with the Aztec pyramid worked fine with the GPU.   It was the indoor Christmas scene with the 3 Gen 8 characters that is the difficult one.   With the 1080 Ti, I did manage to get the outdoor Christmas render to work.   I do have to get Scene Optimizer, but I am waiting for a sale on it.  But with some playing around, I can get quite a few of my scenes to render now.

  • This is surprising. I bought an RTX 2070 Super about a month and a half ago and haven't had any issues. None of my renders have dropped to CPU.

  • jukingeo Posts: 711

    This is surprising. I bought an RTX 2070 Super about a month and a half ago and haven't had any issues. None of my renders have dropped to CPU.

    I have pretty large and involved renders at 1800 pixels across, using multiple Gen 8 characters with many items in a scene.   So that pretty much was choking the 2070.   I needed something with more memory that wouldn't break the bank, so the 1080 Ti became the logical choice.   I still run out of memory from time to time, but I am learning some optimization techniques and it is working out.   It is possible I could have had a bad 2070, but I just wasn't happy with it.

  • Based on what I saw in your logs I would have said it was a lack of power.

    I had something similar occurring on one of my 980s a while back. I reseated the power lead and everything went back to normal. It's a long shot I know, and a moot point now.

  • I am using a GeForce RTX 2070 graphics card with no issues. It sounds like you have a driver issue there.

    You should make sure you remove all instances of the old drivers first, using these commands:
     

    SET DEVMGR_SHOW_NONPRESENT_DEVICES=1

    devmgmt.msc

    Once Device Manager shows up on screen, select "Show hidden devices".

    You should see some faded video card drivers, probably from whatever video card you had previously installed, shown with a transparent icon. Right-click and delete them.

    Also, for the new RTX 2070, you should download and use the NVIDIA Studio Drivers instead of the default gaming drivers.

    I hope this helps.
