Comments
Yeah, this is one of the disadvantages of TCC mode. Also be aware that cooling-system monitoring tools like iCUE lose the ability to read the card's die temp, meaning that if any part of your cooling system is set to work off that statistic, you're going to need to find an alternative.
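If you do need a die-temp readout for something like a fan curve, NVML (the same interface nvidia-smi uses) can still see TCC-mode cards. A minimal sketch, assuming the pynvml Python bindings are installed (pip install nvidia-ml-py):

    # Minimal sketch: read the die temp of GPU 0 via NVML (pynvml assumed)
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in NVML's ordering
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    print("GPU 0 die temp: %d C" % temp)
    pynvml.nvmlShutdown()

Whether your cooling software can be pointed at a value like that is another question, of course.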
The reason for the lower VRAM usage overall is the absence of WDDM mode. WDDM essentially loads an entire game-engine instance into the working memory of each connected GPU. And render times should stay virtually the same - as you've found - so long as the scene being rendered fits comfortably in VRAM on both GPUs, since this won't trigger memory pooling. The only way to trigger memory pooling is to feed the cards a scene that is too large to fit within the memory limits of the smallest connected card. In that case you should see a substantial drop in rendering performance (most likely 20% slower or more - as fast as the Titan RTX's NVLink interface is, its internal memory transfer rate is still much faster), but without CPU fallback or total failure of the render to complete.
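If you want to check whether a given scene actually fit on both cards (i.e. whether pooling could have been triggered), you can watch each GPU's used memory while the render runs. A rough sketch using the same pynvml bindings; the interpretation is mine, not anything Iray reports:

    # Rough sketch: poll per-GPU memory use during a render (pynvml assumed)
    import time
    import pynvml

    pynvml.nvmlInit()
    count = pynvml.nvmlDeviceGetCount()
    for _ in range(12):  # sample for about a minute mid-render
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print("GPU %d: %.1f GiB used of %.1f GiB"
                  % (i, mem.used / 2**30, mem.total / 2**30))
        time.sleep(5)
    pynvml.nvmlShutdown()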
Once you're through with the Titans, I highly recommend trying the same TCC testing with whatever Quadros you have. Despite current marketing, the underlying functionality (GPU peer-to-peer communication via a proprietary Nvidia protocol called GPUDirect) that makes things like memory pooling across Nvidia Quadro cards possible isn't unique to NVLink. Prior Quadro card generations were designed to achieve the same functionality via the system's PCIe bus (notice how far back the development timeline in the GPUDirect link above goes...)
Unless Nvidia has enforced some sort of lock on previous-generation Quadro hardware peering with current-generation hardware/software (which is possible, given how much faster NVLink-based devices can communicate), you should be able to get all of your Quadro and Titan cards working together.
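One quick way to see how your particular mix of cards can peer is the topology matrix nvidia-smi prints - NV# entries mean an NVLink connection, while PIX/PHB/NODE/SYS are the various PCIe paths. A trivial wrapper, assuming nvidia-smi is on your PATH:

    # Print the GPU interconnect topology matrix (NVLink vs. PCIe paths)
    import subprocess

    print(subprocess.run(["nvidia-smi", "topo", "-m"],
                         capture_output=True, text=True).stdout)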
I can once again change surface settings with Iray preview on and not have it revert to CPU. Thank you for this update.
I tried out the latest beta (4.12.1.55) and, though it sometimes seems to take a bit longer than before to crop up, I'm still getting the same "Illegal Memory Access" CPU fallback error. Un-checking the "Allow CPU Fallback" option under the advanced render settings only means that when the error occurs, nothing else can be rendered at all until Daz Studio is restarted. It's also worth pointing out that the issue has persisted even after I upgraded my video card, CPU, motherboard, memory, and hard drive, and regardless of which Nvidia driver version I install.
Fortunately, the last non-beta version (4.12.0.86) shows absolutely no sign of this specific CPU fallback issue, so I'll just keep using that. I just hope that Daz and Nvidia are able to fix the problem before the next general release.
How to run multiple instances now?
The last time I checked they wanted to put in an option to enable multiple instances again. Where do I find it?
Memory sharing with textures seems to work.
My test scene has 3.7 GB of materials.
If NVLink Peer Group Size is set to 0 or 1, the overall memory usage of my test scene is 10.4 GB.
With NVLink Peer Group Size 2, the memory usage is reduced to 8.7 GB - a reduction of roughly half the materials.
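If I'm reading those numbers right, the arithmetic checks out: 10.4 GB − 8.7 GB = 1.7 GB saved, which is close to half of the 3.7 GB of materials (3.7 GB / 2 ≈ 1.85 GB) - i.e. one card no longer has to hold its own duplicate copy of the textures.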
https://www.daz3d.com/forums/discussion/comment/5112696/#Comment_5112696
I'll be honest - after it took me a week to remove the last beta version of Studio and roll back my Daz Studio Public Build and GPU drivers to a point where I could use them again, I am very wary about downloading and trying this beta version. You know, the once-bitten-twice-shy thing. Plus, I've noticed most of the comments made in this thread since the release of the new Daz beta have been made by RTX GPU users. I have two Founders Edition GTX 1080 Ti SC 11 GB GPUs, so I have to ask: are these improvements and fixes meant for GTX card users as well? Because, like I said, after the last couple of really bad Studio installs, I think I might be better off waiting until the movie comes out. Does anyone have any input on GTX GPUs with this version of the Daz beta, before I take the plunge?
That link isn't working ... deref mail something?
https://www.daz3d.com/forums/discussion/comment/5112696/#Comment_5112696
The silence to my question is so reassuring.
It has only been an hour since you asked
Not exactly answering your question, but I have not had issues running 4.12.0.86 or 4.12.1.40 with my GTX 980 TI and driver 430.86
That's encouraging. Have you tried rendering a sequence or series of images? With the last version I could not render animation AT ALL, which is why I'm asking about this one. I appreciate the input.
Only the NVLink option is specific to (the high-end) RTX cards; all the other changes are applicable to all cards that Iray supports, which includes yours.
This looked interesting, so I thought I'd give it a go. I constructed a new scene rather than loading an old one, so I can't comment on that issue.
However, on starting the render, I noticed that Iray defaulted to a CPU render, ignoring the GPU. I wondered briefly whether this was just a text issue, but it's definitely rendering on the CPU. Turning Allow CPU Fallback off forces it to use the GPU, and I've seen no problems with rendering that way - which seems to mean that it's falling back to CPU rendering despite the fact that it could handle the render on the GPU without issue. (It also seems to be faster than the last release build when using the GPU.) Adding the ability to disable CPU fallback appears to have made CPU fallback the default condition.
If you want to upload a simple animation scene, I can try it on my machine with the latest Beta build.
Just double-checked on my system, and am not experiencing this behavior. Rendering with CPU fallback checked results in either GPU rendering, CPU fallback rendering or total failure in that order. Whereas rendering with CPU fallback unchecked results in either GPU rendering or total failure - as expected.
Is there a recommended driver version to go with this beta release? I'd be interested in user experience, especially with GTX rather than RTX GPUs. For the record, I have an Nvidia GTX 1070 with driver version 441.28, but I have been toying with the idea of updating to 441.66, as that was said to fix some problems - but again, I'm not sure whether the fixes were for RTX cards rather than GTX.
Great info - that is what I needed to know. The change log didn't address whether the changes were general updates for all GPUs, or maybe it did and I missed it because there is so much focus on the RTX improvements. For now I think I may hold off a few days and see what others' experience with the new beta has been. The problem I have is that I really need the beta version in my arsenal as my second Studio for working on animated projects that require two Studios open at the same time. Having one beta and one Public Build was optimal until the latest beta and the new GPU driver screwed up everything; it took me a while to get back to where I was, so I am wary of updating anything.
Thank you for the offer. I would only need about 20 to 30 keyframes rendered to see if it falls back to CPU. I can bake my animation cycle into the scene and send it to you, if you have the Cake & Bob Spanish Beach HDRI (https://www.daz3d.com/ultrahd-iray-hdri-with-dof--spanish-beach), all the content in the Isla Bikini Bundle (https://www.daz3d.com/isla-bikini-bundle), and Out Of Touch's Leony ponytail (https://www.daz3d.com/leony-wet-and-dry-ponytail-hair-for-genesis-3-and-8-females)?
That is the scene I am working on. I use the Fluidos water-waves plugin for the waves in the HDRI, but you would not need to add that to the test.
Sorry, I don't do animations; I render a single image at a time. I don't, however, close Studio between renders. I will wait until GPU-Z shows the GPU memory cleared if rendering a large (~4000x6000) image.
430.86 is still the minimum driver, and it's stable and renders w/o problems with my 980 TI.
BTW, is anyone else having trouble installing 4.12.1.55? I can download it, but when I click OK for the install, the window closes and nothing else happens. The first time, it deleted my 4.12.1.40 beta.
Nevermind. After reboot 4.12.1.55 installed w/o problems.
My OBJ exports are no longer loading maps from their original locations in my other programs, like Twinmotion, 3DXchange, or the Win10 3D Viewer.
The paths are listed in the .mtl file.
The release candidate works fine.
The .mtl files appear identical.
The .obj has the same reference to the .mtl file.
I don't do animations, but I do use the Timeline for dForce simulations.
The 4.12 Timeline may be a boon for animators, but it's information overload for dForce sims, especially for people who didn't learn on the old Timeline first. It took me a while to get used to it, even though I used the old Timeline for nearly every dForce sim I did since shortly after dForce was first released. In the older beta I was using, there were two views of the Timeline; with the latest beta, I see there are now three.
However, my issue with the two simplified views of the Timeline is that there is not enough information. Neither the Basic nor the Intermediate view lists the objects in the scene. Even when an object is selected, it doesn't show up in the left column of the Timeline pane. If I've moved a figure's arm to prevent it from intersecting the object being draped, but later realize I need to make additional changes to that movement, I'll see the keyframes but not be able to tell which frames affect the object I need to revise.
Is it not possible to show the objects in the left column with the additional stuff hidden? (i.e. no arrows for expanding.) In other words, is it not possible to provide an optional view that mimics the old Timeline? To me, that should be the Intermediate view, or even a fourth view option: dForce Simplified View. Such an option would make it much easier for new users to learn how to use the Timeline with dForce simulations, (not to mention make it much easier to teach them how to use the Timeline with dForce.)
That isn't new; the old Timeline didn't list nodes either - it simply showed the keys for the selected nodes. The new Intermediate mode is the old Advanced mode (with the option to control the scope of manual key creation).
You don't have to expand things to see the keys - they are shown as the compound triangle icon. When you select a node the list in the Timeline updates to show that node, and you can then click its triangle marker and delete without needing to expand to see the separate properties (unless you want to be selective in deletion, of course, which wasn't possible with the old Timeline).
Rob may be able to look at modifying the way things auto-expand, though he can't promise that; but given that the current beta matches the two old modes, plus adds the new Advanced mode, another mode on top of that is not likely to be considered.
Nobody else had any OBJ export issues?
I might install Notepad++ on this PC later and compare the files to see what's different.
I don't want to be collecting maps for everything; the original paths worked fine before.
Found it: "inverted commas" (quotation marks) are now added around the paths in the .mtl file, apparently by a new option that defaults to checked.
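If any of your target apps still choke on the quoted paths (or you forget to un-check the option on export), stripping the quotes back out is trivial. A throwaway sketch; the filename is hypothetical:

    # Throwaway sketch: strip the quotation marks from the map paths in an
    # exported .mtl file ("scene.mtl" is a hypothetical filename)
    from pathlib import Path

    mtl = Path("scene.mtl")
    lines = mtl.read_text(encoding="utf-8").splitlines()
    fixed = [ln.replace('"', '') if ln.lstrip().startswith("map_") else ln
             for ln in lines]
    mtl.write_text("\n".join(fixed) + "\n", encoding="utf-8")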
I have the previous versions saved, just in case.
This is covered under OBJ Export in the highlights thread: https://www.daz3d.com/forums/discussion/comment/5112696/#Comment_5112696