Daz Studio Pro BETA - version 4.12.2.60! (*UPDATED*)


Comments

  • RayDAnt Posts: 1,135
    edited January 2020
    Elysich said:

    Does the new Beta function for NvLink also work for 2x 2080Ti's? Or just for Titan/Quadros? 

    In theory it should work for ANY combination of RTX or Titan/Quadro cards (including pre-RTX models), although there may be an added stipulation that the cards must be switched into TCC mode - which would exclude all RTX cards that are not Titan/Quadro SKUs. The only way to know for sure is for those with capable hardware to test things out.
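    For anyone who wants to test: switching driver models is normally done with NVIDIA's nvidia-smi tool from an elevated command prompt. A minimal sketch (device indices are illustrative; a reboot is needed for the change to take effect, and a card in TCC mode can no longer drive a display):

    nvidia-smi -L
    nvidia-smi -i 0 --query-gpu=driver_model.current --format=csv
    nvidia-smi -i 0 -dm TCC

    The first command lists the GPUs and their indices, the second shows whether device 0 is currently in WDDM or TCC mode, and the third requests TCC mode for device 0.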

    Post edited by RayDAnt on
  • galien Posts: 137

    I tried the new beta (4.12.1.55) this morning and have found that the changes made do not resolve the issue with Iray preview falling back to CPU in cases where it should not be failing, i.e. in scenes that use less than half of the available GPU memory (I have a GTX 1070).

    Disabling CPU fallback just means that when the GPU render fails, the preview window goes back to using OpenGL. As with the previous beta, I need to save the scene and restart DS in order to render the scene.

    In addition to the 4.12 beta, I have 4.11 installed. Using version 4.11, I have no issues with Iray preview.

  • galien said:

    I tried the new beta (4.12.1.55) this morning and have found that the changes made do not resolve the issue with Iray preview falling back to CPU in cases where it should not be failing, i.e. in scenes that use less than half of the available GPU memory (I have a GTX 1070).

    Disabling CPU fallback just means that when the GPU render fails, the preview window goes back to using OpenGL. As with the previous beta, I need to save the scene and restart DS in order to render the scene.

    In addition to the 4.12 beta, I have 4.11 installed. Using version 4.11, I have no issues with Iray preview.

    There have been no changes with respect to preview; the change to the ordering of operations was designed to help with doing sequential renders in particular. You need to keep an eye on the change log/release notes for changes targeting preview or a new version of Iray.

    I have had a drop to CPU a couple of times recently myself, both times just after adding a geometry shell (for a make-up layer). I suspect, based on a very limited sample of it happening but quite a lot of samples of it not happening, that changing textures is (somewhat) safe but changing geometry is riskier, so it may be worth trying to use Texture Shaded while you are posing/shaping/loading models and keeping the Iray preview for purely material (and, I'd guess, lighting) changes. But as I said, that's extrapolating from a fairly narrow sample.

  • Hi @Richard Haseltine - where can we find that Changelog for the new beta?  I just went to look for it but couldn't find it.  Thanks.

  • and the Highlights thread, which condenses the changes into a series of blocks by subject, is a sticky thread here https://www.daz3d.com/forums/discussion/341001/daz-studio-pro-4-12-highlights#latest

  • edited January 2020

    I have had a drop to CPU a couple of times recently myself, both times just after adding a geometry shell (for a make-up layer). I suspect, based on a very limited sample of it happening but quite a lot of samples of it not happening, that changing textures is (somewhat) safe but changing geometry is riskier, so it may be worth trying to use Texture Shaded while you are posing/shaping/loading models and keeping the Iray preview for purely material (and, I'd guess, lighting) changes. But as I said, that's extrapolating from a fairly narrow sample.

    I can consistently trigger the CPU fallback if I hide/unhide geometry really quickly during Iray preview. This does not happen in 4.12 non-beta. I was using 4.12 non-beta for this reason, but with the new update rendering is much faster (I have 2 Titan RTXs with NVLink, maybe that's why) and more stable. So I've modified my workflow to switch to Texture Shaded while hiding/showing various bits of geometry. I do see higher VRAM usage compared to non-beta, but have not had it fall back to CPU during a render. Maybe that's because all my scenes fit in VRAM anyway.

    The "disable CPU fallback" option seems useless to me: it seems to just disable iray altogether.

     

    Post edited by Chohole on
  • Hurdy3D Posts: 1,047

    I just tested a very heavy test scene in 4.12.1.55

    Using 2x RTX 2080 Ti with NVLink

    NVLink Peer Group Size is set to 2.

    SLI is enabled.

    Nvidia drivers: 441.66

    Scene includes

    https://www.daz3d.com/palm-road (all details visible)

    https://www.daz3d.com/satyr-with-dforce-hair-for-genesis-8-male

    https://www.daz3d.com/satyr-with-dforce-hair-for-genesis-8-female

    https://www.daz3d.com/the-big-beast-ogre-for-genesis-8-male

    https://www.daz3d.com/camezard-hd-original-creature

    https://www.daz3d.com/sina-hd--smile-hd-expression-for-genesis-8-female

    In 4.12.0 the scene falls back to CPU.

    If I remove both satyrs, it renders in GPU mode (4.12.0).

    In 4.12.1.55, Daz Studio just crashes in both scenarios (with and without the satyrs).

  • and the Highlights thread, which condenses the changes into a series of blocks by subject, is a sticky thread here https://www.daz3d.com/forums/discussion/341001/daz-studio-pro-4-12-highlights#latest

    Thanks Leana and Richard. Read through that and have bookmarked / saved links for future reference.

  • Hurdy3D Posts: 1,047

    Just updated to the newest drivers (441.87) and disabled SLI. No improvements. But I checked the logs:

     
    2020-01-21 18:48:09.296 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Emitter geometry import (1 light source with 2 triangles, 1 instance) took 0.000s
    2020-01-21 18:48:09.296 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating environment.
    2020-01-21 18:48:09.297 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating backplate.
    2020-01-21 18:48:09.297 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating lens.
    2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating lights.
    2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating object flags.
    2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating caustic portals.
    2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating decals.
    2020-01-21 18:48:09.315 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Allocating 1-layer frame buffer
    2020-01-21 18:48:09.325 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Using batch scheduling, caustic sampler disabled
    2020-01-21 18:48:09.325 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Initializing local rendering.
    2020-01-21 18:48:09.358 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering with 2 device(s):
    2020-01-21 18:48:09.358 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info :     CUDA device 0 (GeForce RTX 2080 Ti)
    2020-01-21 18:48:09.358 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info :     CUDA device 1 (GeForce RTX 2080 Ti)
    2020-01-21 18:48:09.362 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering...
    2020-01-21 18:48:09.363 Iray [VERBOSE] - IRAY:RENDER ::   1.8   IRAY   rend progr: CUDA device 0 (GeForce RTX 2080 Ti): Processing scene...
    2020-01-21 18:48:09.364 Iray [VERBOSE] - IRAY:RENDER ::   1.13  IRAY   rend progr: CUDA device 1 (GeForce RTX 2080 Ti): Processing scene...
    2020-01-21 18:48:10.244 Iray [VERBOSE] - IRAY:RENDER ::   1.10  IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 0), 0.000 B (host)
    2020-01-21 18:48:10.260 Iray [INFO] - IRAY:RENDER ::   1.10  IRAY   rend info : Using OptiX version 7.0.0
    2020-01-21 18:48:10.283 Iray [VERBOSE] - IRAY:RENDER ::   1.16  IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)
    2020-01-21 18:48:10.308 Iray [INFO] - IRAY:RENDER ::   1.10  IRAY   rend info : Initializing OptiX for CUDA device 0
    2020-01-21 18:48:10.308 Iray [INFO] - IRAY:RENDER ::   1.16  IRAY   rend info : Initializing OptiX for CUDA device 1
    2020-01-21 18:48:10.675 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.10  IRAY   rend error: Unable to allocate 2972422912 bytes from 5797518540 bytes of available device memory
    2020-01-21 18:48:10.679 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.16  IRAY   rend error: Unable to allocate 2972422912 bytes from 5943420518 bytes of available device memory
    2020-01-21 18:48:10.679 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.8   IRAY   rend error: CUDA device 0 (GeForce RTX 2080 Ti): Scene setup failed
    2020-01-21 18:48:10.679 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.13  IRAY   rend error: CUDA device 1 (GeForce RTX 2080 Ti): Scene setup failed
    2020-01-21 18:48:10.683 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.8   IRAY   rend error: CUDA device 0 (GeForce RTX 2080 Ti): Device failed while rendering
    2020-01-21 18:48:10.683 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.13  IRAY   rend error: CUDA device 1 (GeForce RTX 2080 Ti): Device failed while rendering
    2020-01-21 18:48:10.683 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [WARNING] - IRAY:RENDER ::   1.13  IRAY   rend warn : All available GPUs failed.
    2020-01-21 18:48:10.686 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [WARNING] - IRAY:RENDER ::   1.13  IRAY   rend warn : No devices activated. Enabling CPU fallback.
    2020-01-21 18:48:10.686 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER ::   1.13  IRAY   rend error: All workers failed: aborting render
    2020-01-21 18:48:10.687 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: using 16 cores for rendering
    2020-01-21 18:48:10.687 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering with 1 device(s):
    2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info :     CPU
    2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering...
    2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER ::   1.13  IRAY   rend info : Using Embree 2.8.0
    2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER ::   1.13  IRAY   rend info : Initializing Embree
    2020-01-21 18:48:10.690 Iray [VERBOSE] - IRAY:RENDER ::   1.8   IRAY   rend progr: CPU: Processing scene...
    2020-01-21 18:48:31.514 Iray [INFO] - IRAY:RENDER ::   1.13  IRAY   rend info : Importing lights for motion time 0
  • Hurdy3D Posts: 1,047
    edited January 2020

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)
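    For reference, a quick sketch of what the raw byte counts in the log work out to (plain Python, numbers copied from the log above):

    # numbers reported by Iray in the log above
    available = 5943420518      # "available device memory" on device 1, in bytes
    requested = 2972422912      # size of the allocation that failed, in bytes
    print(available / 2**30)    # ~5.54 GiB (~5.9 GB decimal) free per the driver
    print(requested / 2**30)    # ~2.77 GiB that Iray tried to allocate in one block

    So the driver really is reporting only about half of the card's 11 GB as available at that moment, which is exactly the puzzle.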

    Post edited by Hurdy3D on
  • VEGA Posts: 86
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.
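    To illustrate why high SubD levels get expensive so quickly: each subdivision level quadruples the face count. A rough sketch in Python (the base face count is an assumed ballpark, not an exact Genesis 8 figure):

    # face-count growth under subdivision: each level multiplies faces by 4
    base_faces = 16_000                         # assumed ballpark for a base-resolution figure
    for level in range(6):
        print(level, base_faces * 4 ** level)   # level 5 -> ~16.4 million faces

    And that's geometry alone, before textures and the renderer's working memory on top.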

  • Hurdy3D Posts: 1,047
    VEGA said:
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.

    I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB. That's something I would expect from an 8 GB graphics card.

  • VEGA Posts: 86
    gerster said:
    VEGA said:
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.

    I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB. That's something I would expect from an 8 GB graphics card.

    This is taken from this chaosgroup page talking about NVLink profiling https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards :

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work.

    They don't state which driver they use though.

  • RayDAnt Posts: 1,135
    edited January 2020
    gerster said:
    VEGA said:
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.

    I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB. That's something I would expect from an 8 GB graphics card.

    Out of curiosity, if you go back to your log file and search for the following line:

    2020-01-21 14:32:29.130 Iray [INFO] - IRAY:RENDER ::   1.1   IRAY   rend info : CUDA device 0 (TITAN RTX): compute capability 7.5, 24.000 GiB total, 20.069 GiB available, display attached

    What number do you see for the "GiB available" statistic in that line?
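    If it's easier to search programmatically, here's a minimal Python sketch (the log path is the usual Windows default and is an assumption; Help > Troubleshooting > View Log File in Daz Studio should show the actual file):

    # scan the Daz Studio log for Iray's per-device memory summary
    import os

    log_path = os.path.expandvars(r"%APPDATA%\DAZ 3D\Studio4\log.txt")  # assumed default location
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "GiB available" in line:  # the device summary lines shown above
                print(line.rstrip())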

    Post edited by RayDAnt on
  • Hurdy3D Posts: 1,047
    VEGA said:
    gerster said:
    VEGA said:
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.

    I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB. That's something I would expect from an 8 GB graphics card.

    This is taken from this chaosgroup page talking about NVLink profiling https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards :

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work.

    They don't state which driver they use though.

    That NVLink requires SLI to be enabled is useful info.

    However, that wasn't my confusion; it's that only 5.9 GB of my VRAM is available according to the logs. I'm still researching, but I think the answer is the heavy geometry of that scene. I tested another scene where definitely more than 7 GB of VRAM is used in total.

    More importantly, I figured out what makes DS 4.12.1.55 crash every time I start my test render.

    It looks like it's this product (with all details on): https://www.daz3d.com/palm-road

    DS 4.12.0 renders Palm Road fine.

    Can anyone test whether Palm Road crashes on DS 4.12.1.55, too?

  • marble Posts: 7,500
    edited January 2020

    Glad to see this (if it is what I think it is) ... I had asked for this in the Product Suggestions forum:

     Simulation - dForce

    • Added a "Simulate Selected" action
      • Performs a custom simulation with only the selected nodes (per their respective settings) considered
        • Select ALL nodes in the scene that you want to participate in the simulation, including any nodes that provide wind and/or serve as collision targets
      • Available from the Simulation Settings pane option menu
      • Underlying functionality of this action is not new (possible since 4.10 - see script sample here); what is new is the convenience of a predefined action

    Post edited by Richard Haseltine on
  • I have had a drop to CPU a couple of times recently myself, both times just after adding a geometry shell (for a make-up layer). I suspect, based on a very limited sample of it happening but quite a lot of samples of it not happening, that changing textures is (somewhat) safe but changing geometry is riskier, so it may be worth trying to use Texture Shaded while you are posing/shaping/loading models and keeping the Iray preview for purely material (and, I'd guess, lighting) changes. But as I said, that's extrapolating from a fairly narrow sample.

    I can consistently trigger the CPU fallback if I hide/unhide geometry really quickly during Iray preview. This does not happen in 4.12 non-beta. I was using 4.12 non-beta for this reason, but with the new update rendering is much faster (I have 2 Titan RTXs with NVLink, maybe that's why) and more stable. So I've modified my workflow to switch to Texture Shaded while hiding/showing various bits of geometry. I do see higher VRAM usage compared to non-beta, but have not had it fall back to CPU during a render. Maybe that's because all my scenes fit in VRAM anyway.

    Hide/unhide would be a geometry update.

    The "disable CPU fallback" option seems useless to me: it seems to just disable Iray altogether.

    I think the main demand has come from people who don't want the render to drop to CPU and tie up their system. It certainly isn't related to, still less a fix for, the issues that cause the GPU(s) to drop out of rendering.

  • L'Adair Posts: 9,479
    nicstt said:

    Changing the Max Render Time (in seconds) restarts the render; I don't recall this happening previously.

    I've had that happen with 4.12.0.85 once. As I was at something like 12K+ iterations, I was not happy about it. (My default is 15K, and I was fine with the image, so I wanted to stop it at a predetermined value, like 12,500 or so, so I had the exact value for any spot renders.)

    I also noticed that in previous release versions, changing the Tone Mapping had stopped causing the render to start over, but in the 4.12 betas it does start over. It would have been frustrating except that it appears to retain some of the information, so it seems to take only "a few" iterations before it's back to where it was before tweaking. But what a heart-stopper it was the first time that happened. I hope that by the 4.12 release it will just continue from the same point, but that may be a function of Iray, not something DS controls.


    So far, I've avoided updating the beta any further due to bugs mentioned in this thread, and the fact that I don't do animations; my only use for the Timeline is for dForce. However, I noticed that build number 4.12.1.50 indicates an update for dForce, to 1.2.1.5, so it may be time to update…

  • marble Posts: 7,500
    edited January 2020
    marble said:

    Glad to see this (if it is what I think it is) ... I had asked for this in the Product Suggestions forum:

     Simulation - dForce

    • Added a "Simulate Selected" action
      • Performs a custom simulation with only the selected nodes (per their respective settings) considered
        • Select ALL nodes in the scene that you want to participate in the simulation, including any nodes that provide wind and/or serve as collision targets
      • Available from the Simulation Settings pane option menu
      • Underlying functionality of this action is not new (possible since 4.10 - see script sample here); what is new is the convenience of a predefined action

    Just tried the Simulate Selected in this new Beta ... sadly, it doesn't work. Depending upon which dForce garment is loaded for G8F, I've had a dress bunch into a heap which completely falls off the G8 figure, or one which stays on the figure but half of the cloth ends up inside G8. How they can release things like this, and the first time I test it I find the bugs, is beyond me.

    By the way, using the old method (just click the blue "Simulate" button) works fine, as it always has, but this is one result I got using the Simulate Selected option:

    sim_selected.png
    682 x 1037 - 207K
    Post edited by Richard Haseltine on
  • marble said:
    How they can release things like this, and the first time I test it I find the bugs, is beyond me.

    It's called a Beta release.

  • marble said:
    marble said:

    Glad to see this (if it is what I think it is) ... I had asked for this in the Product Suggestions forum:

     Simulation - dForce

    • Added a "Simulate Selected" action
      • Performs a custom simulation with only the selected nodes (per their respective settings) considered
        • Select ALL nodes in the scene that you want to participate in the simulation, including any nodes that provide wind and/or serve as collision targets
      • Available from the Simulation Settings pane option menu
      • Underlying functionality of this action is not new (possible since 4.10 - see script sample here); what is new is the convenience of a predefined action

    Just tried the Simulate Selected in this new Beta ... sadly, it doesn't work. Depending upon which dForce garment is loaded for G8F, I've had a dress bunch into a heap which completely falls off the G8 figure, or one which stays on the figure but half of the cloth ends up inside G8. How they can release things like this, and the first time I test it I find the bugs, is beyond me.

    By the way, using the old method (just click the blue "Simulate" button) works fine, as it always has, but this is one result I got using the Simulate Selected option:

    Are you selecting both the item you want to simulate and the things you want it to collide with? I missed the second part when I first tried it, too.

  • Imago Posts: 5,155
    VEGA said:
    gerster said:
    VEGA said:
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.

    I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB. That's something I would expect from an 8 GB graphics card.

    This is taken from this chaosgroup page talking about NVLink profiling https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards :

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work.

    They don't state which driver they use though.

    As far as I know, NVLink works completely (VRAM pooling, 120 GB/s operation) only on Quadro RTX cards; all the other RTX cards work only in SLI mode, so no VRAM pooling and only 10 GB/s operation, just CUDA pooling.

    Am I wrong?

  • VEGA Posts: 86
    Imago said:
    VEGA said:
    gerster said:
    VEGA said:
    gerster said:

    Why does it say there's only 5943420518 bytes (5.9 GB) of VRAM available?

    According to GPU-Z, the memory is completely free.

    This error (with 5.9 GB free memory) occurs even in 4.12.0, and still happens if I load the test scene without the viewport.

    I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)

    Keep in mind that it's not exactly 2x 11 GB. There is hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the available VRAM would be more like 19-20 GB at best, instead of 22 GB. I also noticed this: "IRAY   rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot. One Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.

    I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB. That's something I would expect from an 8 GB graphics card.

    This is taken from this chaosgroup page talking about NVLink profiling https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards :

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work.

    They don't state which driver they use though.

    As far as I know, NVLink works completely (VRAM pooling, 120 GB/s operation) only on Quadro RTX cards; all the other RTX cards work only in SLI mode, so no VRAM pooling and only 10 GB/s operation, just CUDA pooling.

    Am I wrong?

    On Linux the memory pooling, or at least the connection, works, and apparently it should work on Windows too if the software supports it. The 2080 Ti has 2-way NVLink (100 GB/s); Quadro cards should have 6-way (300 GB/s). Look at this page to get your answers: https://www.pugetsystems.com/labs/articles/NVLink-on-NVIDIA-GeForce-RTX-2080-2080-Ti-in-Windows-10-1253/

  • Richard Haseltine said:

    I think the main demand has come from people who don't want the render to drop to CPU and tie up their system. It certainly isn't related to, still less a fix for, the issues that cause the GPU(s) to drop out of rendering.

    Makes sense. I actually found myself using it. Better to know that the GPU failed than wonder why everything has slowed to a crawl. If only to remind me to restart.

    VEGA said:

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work.

    They don't state which driver they use though.

    I did some testing with a scene on 2xTitan RTX (+ NVLink) and 441.87 drivers with the latest 4.12 DAZ public beta. DAZ pool size is set to 2.

    The scene used ~8.5GB VRAM on my single GTX 1080ti with DAZ 4.12 non-beta.

    With SLI disabled, one GPU uses 9.5GB VRAM and the other 8.5GB. Both GPUs are used ~100% in the CUDA category while rendering, but the first GPU is also using 90-100% in the "copy" category (as seen in Windows Task Manager).

    With SLI enabled, both GPUs always see identical VRAM usage at all times and both use 8.5GB for the scene. Both GPUs are used 100% CUDA while rendering, but after some initial activity the copy category on the first GPU dropped to 0%. The render finished 40% faster than the non-SLI case (and the only difference was the NVidia control panel SLI setting).

    The above behavior is identical even for the 4.12 non-beta version of DAZ (including the 40% faster render time); however, the identical VRAM usage is 9.5 GB per GPU.

    So the SLI setting is definitely doing something on the Titan, which likely applies to the 2080ti too.

    I have no idea how to check for memory pooling. I might need a scene larger than 24GB. Only thing I can tell is that non-beta DAZ uses 1GB more in the same setting.

  • RayDAnt Posts: 1,135
    edited January 2020
    VEGA said:

    On Linux the memory pooling, or at least the connection, works, and apparently it should work on Windows too if the software supports it. The 2080 Ti has 2-way NVLink (100 GB/s); Quadro cards should have 6-way (300 GB/s). Look at this page to get your answers: https://www.pugetsystems.com/labs/articles/NVLink-on-NVIDIA-GeForce-RTX-2080-2080-Ti-in-Windows-10-1253/

    A couple of updates/addenda to the info in that Puget Systems article:

    All Quadro RTX cards share exactly the same physical NVLink implementation as their equivalent GeForce/Titan RTX counterparts (since they all use exactly the same GPU dies, and NVLink is physically integrated into the GPU die itself on the Turing microarchitecture). Meaning - interestingly - that the max NVLink bandwidth available on current-generation Quadro cards is just 2-way 100GT/s (i.e. 300GT/s was not carried over from Volta to the Turing spec). In terms of potential NVLink capabilities on non-Quadro RTX cards this is a good thing, because it means the only thing that differentiates the two product lines is software implementation - not hardware functionality.

    TCC mode (what Puget Systems found to be the secret for getting memory pooling to work on GP100/GV100 cards under Windows) is supported on all Titan series cards as well as all Quadros. Meaning that anyone looking to take full advantage of NVLink capabilities on a pair of Titan RTX or Quadro cards should switch them into TCC mode (see this post for instructions on how to do this), in addition to linking them with an NVLink bridge, to get memory pooling in Iray to function. That means the only cards for which SLI should be relevant in getting NVLink to work are GeForce RTX cards.
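    For those experimenting, the driver itself can report whether the bridge is actually active; a quick sketch with nvidia-smi (device index illustrative):

    nvidia-smi nvlink --status
    nvidia-smi -q -i 0

    The first command prints the per-link state/speed for each GPU; the second is a full query for device 0, which on Windows includes the current driver model (WDDM or TCC).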

    Post edited by RayDAnt on
  • edited January 2020

    RayDAnt said:

    TCC mode (what Puget Systems found to be the secret for getting memory pooling to work on GP100/GV100 cards under Windows) is supported on all Titan series cards as well as all Quadros. Meaning that anyone looking to take full advantage of NVLink capabilities on a pair of Titan RTX or Quadro cards should switch them into TCC mode (see this post for instructions on how to do this), in addition to linking them with an NVLink bridge, to get memory pooling in Iray to function. That means the only cards for which SLI should be relevant in getting NVLink to work are GeForce RTX cards.

    Switched the Titans to TCC mode to try again (my prior post had them in WDDM mode and driving the displays DAZ was running on). I also had to set my 3rd non-CUDA graphics card as primary in BIOS for the TCC mode to stick.

    Sadly, I could not use the same tools to measure VRAM, since the usual suspects (GPU-Z, Afterburner) all showed 100% VRAM usage. Only nvidia-smi.exe showed a relevant number, which indicated 7 GB VRAM for one GPU and 5 GB for the other, for the same scene as in my previous post - which is lower. Render times were the same as WDDM mode with SLI enabled.
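    For reference, that kind of per-GPU polling can also be scripted with nvidia-smi; a sketch (the 5-second refresh interval is arbitrary):

    nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv -l 5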

    Still not sure if pooling is active and whether TCC mode is required for the Titan. I only have one 2080 Ti, so I can't confirm whether things are different for it (and all my Quadros are pre-NVLink).

    Post edited by grimulkan_9cfbd329bc on
  • marble Posts: 7,500
    marble said:
    marble said:

    Glad to see this (if it is what I think it is) ... I had asked for this in the Product Suggestions forum:

     Simulation - dForce

    • Added a "Simulate Selected" action
      • Performs a custom simulation with only the selected nodes (per their respective settings) considered
        • Select ALL nodes in the scene that you want to participate in the simulation, including any nodes that provide wind and/or serve as collision targets
      • Available from the Simulation Settings pane option menu
      • Underlying functionality of this action is not new (possible since 4.10 - see script sample here); what is new is the convenience of a predefined action

    Just tried the Simulate Selected in this new Beta ... sadly, it doesn't work. Depending upon which dForce garment is loaded for G8F, I've had a dress bunch into a heap which completely falls off the G8 figure, or one which stays on the figure but half of the cloth ends up inside G8. How they can release things like this, and the first time I test it I find the bugs, is beyond me.

    By the way, using the old method (just click the blue "Simulate" button) works fine, as it always has, but this is one result I got using the Simulate Selected option:

    Are you selecting both the item you want to simulate and the things you want it to collide with? I missed the second part when I first tried it, too.

    Right enough, it does stress "ALL the nodes" but I didn't read that to mean the collision objects too. I guess it makes sense even though it isn't quite intuitive after using dForce in the original manner.

    Thanks for putting me right.
