Comments
In theory it should work for ANY combination of RTX or Titan/Quadro (including pre-RTX model) cards. Although there may be an added stipulation that the cards must be switched into TCC mode - which would exclude all RTX cards not of a Titan/Quadro SKU. The only way to know for sure is for those with capable hardware to test things out.
https://en.wikipedia.org/wiki/NVLink
NVIDIA NVLink Bridge Compatibility Chart
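For anyone who wants to check which driver model their cards are currently in before experimenting, here is a minimal sketch in Python (an assumption on my part: it relies on nvidia-smi being on the PATH, and the driver_model fields are Windows-only):

import subprocess

# Query each GPU's current driver model (WDDM vs TCC) via nvidia-smi.
out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=index,name,driver_model.current",
     "--format=csv,noheader"],
    text=True)

for line in out.strip().splitlines():
    index, name, model = [field.strip() for field in line.split(",")]
    print(f"GPU {index} ({name}): driver model = {model}")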
I tried the new beta (4.12.1.55) this morning and have found that the changes do not resolve the issue of the Iray preview falling back to CPU in cases where it should not fail, i.e. in scenes that use less than half the GPU memory that is available (I have a GTX 1070).
Disabling CPU fallback just means that when the GPU render fails, the preview window goes back to using OpenGL. As with the previous beta, I need to save the scene and restart DS in order to render the scene.
In addition to the 4.12 beta, I have 4.11 installed. Using version 4.11, I have no issues with IRay preview.
There have been no changes with respect to preview; the change to the ordering of operations was designed to help with doing sequential renders in particular. You need to keep an eye on the change log/release notes for changes targeting preview or a new version of Iray.
I have had a drop to CPU a couple of times recently myself, both times just after adding a geometry shell (for a make-up layer). I suspect, based on a very limited sample of it happening but quite a lot of samples of it not happening, that changing textures is (somewhat) safe but changing geometry is riskier, so it may be worth using Texture Shaded while you are posing/shaping/loading models and keeping the Iray preview for purely material (and, I'd guess, lighting) changes. But as I said, that's extrapolating from a fairly narrow sample.
Hi @Richard Haseltine - where can we find that Changelog for the new beta? I just went to look for it but couldn't find it. Thanks.
Changelog is here: http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log
and the Highlights thread, which condenses the changes into a series of blocks by subject, is a sticky thread here: https://www.daz3d.com/forums/discussion/341001/daz-studio-pro-4-12-highlights#latest
I can consistently trigger the CPU fallback if I hide/unhide geometry really quickly during iray preview. This does not happen in 4.12 non-beta. Was using 4.12 non-beta for this reason, but with the new update rendering is much faster (I have 2 Titan RTXs with NVlink, maybe that's why) and more stable. So I've modified my workflow to switch to texture shaded while hiding/showing various bits of geometry. I do see higher VRAM usage compared to non-beta, but have not had it fall back to CPU during a render. Maybe because all my scenes fit in VRAM anyway.
The "disable CPU fallback" option seems useless to me: it seems to just disable iray altogether.
I just tested a very heavy test scene in 4.12.1.55
Using 2x RTX 2080ti with NVLINK
NVLINK Peer Group Size is set to 2.
SLI is enabled.
Nvidia drivers: 441.66
Scene includes
https://www.daz3d.com/palm-road (all details visible)
https://www.daz3d.com/satyr-with-dforce-hair-for-genesis-8-male
https://www.daz3d.com/satyr-with-dforce-hair-for-genesis-8-female
https://www.daz3d.com/the-big-beast-ogre-for-genesis-8-male
https://www.daz3d.com/camezard-hd-original-creature
https://www.daz3d.com/sina-hd--smile-hd-expression-for-genesis-8-female
In 4.12.0 the scene falls back to CPU.
If I remove both satyrs it renders in GPU mode (4.12.0).
In 4.12.1.55 Daz Studio just crashes in both scenarios (with and without the satyrs).
Thanks Leanna and Richard. Read through that and have bookmarked / saved links for future reference.
Just updated to the newest drivers (441.87) and disabled SLI. No improvements. But I checked the logs:
2020-01-21 18:48:09.296 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating environment.
2020-01-21 18:48:09.297 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating backplate.
2020-01-21 18:48:09.297 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating lens.
2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating lights.
2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating object flags.
2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating caustic portals.
2020-01-21 18:48:09.301 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Updating decals.
2020-01-21 18:48:09.315 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Allocating 1-layer frame buffer
2020-01-21 18:48:09.325 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Using batch scheduling, caustic sampler disabled
2020-01-21 18:48:09.325 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Initializing local rendering.
2020-01-21 18:48:09.358 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Rendering with 2 device(s):
2020-01-21 18:48:09.358 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2080 Ti)
2020-01-21 18:48:09.358 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (GeForce RTX 2080 Ti)
2020-01-21 18:48:09.362 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Rendering...
2020-01-21 18:48:09.363 Iray [VERBOSE] - IRAY:RENDER :: 1.8 IRAY rend progr: CUDA device 0 (GeForce RTX 2080 Ti): Processing scene...
2020-01-21 18:48:09.364 Iray [VERBOSE] - IRAY:RENDER :: 1.13 IRAY rend progr: CUDA device 1 (GeForce RTX 2080 Ti): Processing scene...
2020-01-21 18:48:10.244 Iray [VERBOSE] - IRAY:RENDER :: 1.10 IRAY rend stat : Geometry memory consumption: 3.542 GiB (device 0), 0.000 B (host)
2020-01-21 18:48:10.260 Iray [INFO] - IRAY:RENDER :: 1.10 IRAY rend info : Using OptiX version 7.0.0
2020-01-21 18:48:10.283 Iray [VERBOSE] - IRAY:RENDER :: 1.16 IRAY rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)
2020-01-21 18:48:10.308 Iray [INFO] - IRAY:RENDER :: 1.10 IRAY rend info : Initializing OptiX for CUDA device 0
2020-01-21 18:48:10.308 Iray [INFO] - IRAY:RENDER :: 1.16 IRAY rend info : Initializing OptiX for CUDA device 1
2020-01-21 18:48:10.675 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.10 IRAY rend error: Unable to allocate 2972422912 bytes from 5797518540 bytes of available device memory
2020-01-21 18:48:10.679 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.16 IRAY rend error: Unable to allocate 2972422912 bytes from 5943420518 bytes of available device memory
2020-01-21 18:48:10.679 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.8 IRAY rend error: CUDA device 0 (GeForce RTX 2080 Ti): Scene setup failed
2020-01-21 18:48:10.679 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.13 IRAY rend error: CUDA device 1 (GeForce RTX 2080 Ti): Scene setup failed
2020-01-21 18:48:10.683 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.8 IRAY rend error: CUDA device 0 (GeForce RTX 2080 Ti): Device failed while rendering
2020-01-21 18:48:10.683 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.13 IRAY rend error: CUDA device 1 (GeForce RTX 2080 Ti): Device failed while rendering
2020-01-21 18:48:10.683 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [WARNING] - IRAY:RENDER :: 1.13 IRAY rend warn : All available GPUs failed.
2020-01-21 18:48:10.686 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [WARNING] - IRAY:RENDER :: 1.13 IRAY rend warn : No devices activated. Enabling CPU fallback.
2020-01-21 18:48:10.686 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(332): Iray [ERROR] - IRAY:RENDER :: 1.13 IRAY rend error: All workers failed: aborting render
2020-01-21 18:48:10.687 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CPU: using 16 cores for rendering
2020-01-21 18:48:10.687 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Rendering with 1 device(s):
2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CPU
2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Rendering...
2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER :: 1.13 IRAY rend info : Using Embree 2.8.0
2020-01-21 18:48:10.690 Iray [INFO] - IRAY:RENDER :: 1.13 IRAY rend info : Initializing Embree
2020-01-21 18:48:10.690 Iray [VERBOSE] - IRAY:RENDER :: 1.8 IRAY rend progr: CPU: Processing scene...
2020-01-21 18:48:31.514 Iray [INFO] - IRAY:RENDER :: 1.13 IRAY rend info : Importing lights for motion time 0
Why does it say there are only 5943420518 bytes (5.9 GB) of VRAM available?
According to GPU-Z the memory is completely free.
This error (with 5.9 GB free memory) occurs even in 4.12.0, and still occurs if I load the test scene without the Iray viewport.
I would expect 8 to 9 GB to be available for each card. o.O (2080 Ti with 11 GB)
Keep in mind that it's not exactly 2x11 GB. There is a hardware-reserved space, and the code required for rendering needs to be present on both cards, plus some scene elements. If it worked, the VRAM available would be more like 19-20 GB at best, instead of 22 GB. I noticed this: "IRAY rend stat : Geometry memory consumption: 3.542 GiB (device 1), 0.000 B (host)". 3.54 GiB of geometry is quite a lot; one Genesis 8 HD figure with Render SubD set to 5, which is almost 1 GiB of geometry, can take roughly 7 GB of VRAM.
I'm fully aware that I don't get the full 11 GB of VRAM. But I'm really irritated that I only get 5.9 GB; that's something I would expect from an 8 GB graphics card.
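One way to double-check the free/total numbers the renderer starts from is to ask NVML directly. A minimal sketch using the pynvml package (purely illustrative; this is not something Daz Studio itself exposes):

import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # Values are in bytes; under WDDM the OS reserves a slice, so free < total even when idle.
        print(f"GPU {i} ({name}): {mem.free / 2**30:.2f} GiB free of {mem.total / 2**30:.2f} GiB")
finally:
    pynvml.nvmlShutdown()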
This is taken from this chaosgroup page talking about NVLink profiling https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards :
For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work.
They don't state which driver they use though.
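Incidentally, the driver can report whether the NVLink connection itself is active, independent of the SLI toggle. A one-line sketch (assuming nvidia-smi is on the PATH and the driver is recent enough to have the nvlink subcommand):

import subprocess

# Prints per-link NVLink state for each GPU; inactive links are shown as such.
print(subprocess.check_output(["nvidia-smi", "nvlink", "--status"], text=True))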
Out of curiosity, if you go back to your log file and search for the following line:
What number do you see for the statistic in blue above?
That NVLink requires SLI to be enabled is useful info.
However, that wasn't my confusion; it's why only 5.9 GB of my VRAM is available according to the logs. I'm still researching, but I think the answer is the high geometry load of that scene. I tested another scene where definitely more than 7 GB of VRAM was used in total.
More importantly, I figured out what makes DS 4.12.1.55 crash every time I start my test render.
Looks like it's that product (with all details on): https://www.daz3d.com/palm-road
DS 4.12.0 renders Palm Road fine.
Can anyone test whether Palm Road crashes on DS 4.12.1.55, too?
Glad to see this (if it is what I think it is) ... I had asked for this in the Product Suggestions forum:
Hide/unhide would be a geometry update
I think the main demand has come from people who don't want the render to drop to CPU and tie up their system. It certainly isn't related to, still less a fix for, the issues that cause the GPU(s) to drop out of rendering.
I've had that happen with 4.12.0.85 once. As I was at something like 12K+ I was not happy about it. (My default is 15K, and I was fine with the image, so I wanted to stop it at a predetermined value, like 12,500 or such, so I had the exact value for any spot renders.)
I also noticed that in previous release versions, changing the Tone Mapping had stopped causing the render to start over, but in the 4.12 betas it does start over. It would have been frustrating, except it appears to retain some of the information, so it seems to take only "a few" iterations before it's back to where it was before the tweak. But what a heart-stopper it was the first time that happened. I hope that by the 4.12 release it will just continue from the same point, but that may be a function of Iray, not something DS controls.
So far, I've avoided updating the beta any further due to bugs mentioned in this thread, and the fact I don't do animations. So far, my only use for the Timeline is for dForce. However, I noticed that build number 4.12.1.50 indicates an update for dForce, to 1.2.1.5, so it may be time to update…
Just tried the Simulate Selected in this new Beta ... sadly it doesn't work. Depending upon which dForce garment is loaded for G8F, I've had a dress bunch into a heap and fall completely off the G8 figure, or one that stays on the figure but ends up with half of the cloth inside G8. How they can release things like this, and the first time I test it I find the bugs, is beyond me.
By the way, using the old method (just click the blue "Simulate" button) works fine, as it always has, but this is one result I got using the Simulate Selected option:
It's called a Beta release.
Are you selecting both the item you want to simulate and the things you want it to collide with? I missed the second part when I first tried it, too.
As far as I know, NVLink works completely (VRAM pooling, 120gb/s operations) only on RTX Quadro cards; all the other RTX cards work only in SLI mode, so no VRAM pooling and only 10gb/s operations, just CUDA pooling.
Am I wrong?
On Linux the memory pooling, or at least the connection, works, and apparently it should work on Windows too if the software supports it. The 2080 Ti has 2-way (100gb/s); Quadro cards should have 6-way (300gb/s). Look at this page to get your answers: https://www.pugetsystems.com/labs/articles/NVLink-on-NVIDIA-GeForce-RTX-2080-2080-Ti-in-Windows-10-1253/
Richard Haseltine said:
Makes sense. I actually found myself using it. Better to know that the GPU failed than wonder why everything has slowed to a crawl. If only to remind me to restart.
VEGA said:
I did some testing with a scene on 2xTitan RTX (+ NVLink) and 441.87 drivers with the latest 4.12 DAZ public beta. DAZ pool size is set to 2.
The scene used ~8.5GB VRAM on my single GTX 1080ti with DAZ 4.12 non-beta.
With SLI disabled, one GPU uses 9.5GB VRAM and the other 8.5GB. Both GPUs are used ~100% in the CUDA category while rendering, but the first GPU is also using 90-100% in the "copy" category (as seen in Windows Task Manager).
With SLI enabled, both GPUs always see identical VRAM usage at all times and both use 8.5GB for the scene. Both GPUs are used 100% CUDA while rendering, but after some initial activity the copy category on the first GPU dropped to 0%. The render finished 40% faster than the non-SLI case (and the only difference was the NVidia control panel SLI setting).
The above behavior is identical even for the 4.12 non-beta version of DAZ (including the 40% faster render time); however, the identical VRAM usage is 9.5GB per GPU.
So the SLI setting is definitely doing something on the Titan, which likely applies to the 2080ti too.
I have no idea how to check for memory pooling. I might need a scene larger than 24GB. Only thing I can tell is that non-beta DAZ uses 1GB more in the same setting.
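For anyone repeating these measurements, a small polling loop makes it easier to watch per-GPU VRAM while a render runs than eyeballing Task Manager. A sketch with pynvml (same caveats as the earlier snippet):

import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]
try:
    while True:  # Ctrl+C to stop
        usage = [pynvml.nvmlDeviceGetMemoryInfo(h).used / 2**30 for h in handles]
        print(" | ".join(f"GPU {i}: {u:.2f} GiB used" for i, u in enumerate(usage)))
        time.sleep(2)  # sample every two seconds
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()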
A couple updates/addendums to the info in that Puget Systems article:
All Quadro RTX cards share exactly the same physical NVLink implementation as their equivalent GeForce/Titan RTX counterparts (since they all use exactly the same GPU dies, and NVLink is physically integrated into the GPU die itself on the Turing microarchitecture). Meaning - interestingly - that the max NVLink bandwidth available on current generation Quadro cards is just 2-way 100GT/s (i.e. 300GT/s was not carried over from Volta to the Turing spec). In terms of potential NVLink capabilities on non-Quadro RTX cards this is a good thing, because it means the only thing that differentiates the two product lines is software implementation, not hardware functionality.
TCC mode (what Puget Systems found to be the secret to getting memory pooling to work on GP100/GV100 cards under Windows) is supported on all Titan series cards as well as all Quadros. Meaning that anyone looking to take full advantage of NVLink capabilities on a pair of Titan RTX or Quadro cards should switch them into TCC mode (see this post for instructions on how to do this) in addition to linking them with an NVLink bridge to get memory pooling in Iray to function. Meaning that the only cards for which SLI should be relevant in getting NVLink to work are GeForce RTX cards.
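Since the how-to is only linked above, the gist as I understand it is a driver-model switch via nvidia-smi; a hedged sketch (Windows only, needs an elevated prompt, a reboot for the change to stick, and the card can't be driving a display while in TCC mode):

import subprocess

# Request the TCC driver model (1 = TCC, 0 = WDDM) for GPUs 0 and 1.
for gpu in ("0", "1"):
    subprocess.run(["nvidia-smi", "-i", gpu, "-dm", "1"], check=True)

# Show current and pending driver model (pending takes effect after restart).
print(subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=index,driver_model.current,driver_model.pending",
     "--format=csv"],
    text=True))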
RayDAnt said:
Switched the Titans to TCC mode to try again (my prior post had them in WDDM mode and driving the displays DAZ was running on). I also had to set my 3rd non-CUDA graphics card as primary in BIOS for the TCC mode to stick.
Sadly, I could not use the same tools to measure VRAM since the usual suspects (GPU-Z, Afterburner) all showed full 100% VRAM usage. Only nvidia-smi.exe showed a relevant number, which indicated 7GB VRAM for one GPU and 5GB VRAM for the other, for the same scene in my previous post. Which is lower. Render times were the same as WDDM mode with SLI enabled.
Still not sure if pooling is active and whether TCC mode is required for the Titan. I only have one 2080ti so I can't confirm if things are different for it (and all my Quadros are pre-NVlink).
Right enough, it does stress "ALL the nodes", but I didn't read that to mean the collision objects too. I guess it makes sense, even though it isn't quite intuitive after using dForce in the original manner.
Thanks for putting me right.