Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
With this one it took 2 minutes 40 seconds, and with SickleYield's test, 1 minute 43 seconds.
pc specs
CHRONOS-pro
Corsair 350D
Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
Motherboard: ASUS Maximus VIII Gene
Windows 10 Pro
Graphic Cards: Dual 8GB NVIDIA GTX 1080
How many Genesis 3 characters with lighting and a background do you think an 8 GB VRAM card can hold/render?
And is there any way of checking how much VRAM a scene is going to use before you hit the render button, rather than checking the logs after?
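One low-tech way to answer the VRAM question yourself is to watch the card with NVIDIA's `nvidia-smi` tool before and after loading a scene. A minimal sketch; the `--query-gpu` flags are standard `nvidia-smi` usage, but the helper names are mine:

```python
import subprocess

def parse_vram(csv_text: str) -> int:
    """Parse the first GPU's memory.used value (MiB) from nvidia-smi CSV output."""
    return int(csv_text.strip().splitlines()[0])

def query_vram_mib() -> int:
    """Ask nvidia-smi for current VRAM usage in MiB (requires an NVIDIA driver)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_vram(out)

# With those flags, nvidia-smi emits one bare number per GPU, e.g. "7168":
print(parse_vram("7168\n"))  # prints 7168
```

Run it once before loading the scene and once after; the difference is roughly what the scene occupies, though Iray allocates more once the render actually starts.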
I've discovered that with my 1070 card it depends upon the camera angle. I rendered the same scene several times from different angles. From some angles it renders very fast and the iterations tick by like seconds, but if I change the angle they tick by like one every 10 seconds. It might be affected by the window in the scene, but only if it's almost dead on.
seriously, your 1070 does in 3 minutes what your 980ti takes 30 minutes to achieve; please pull the other one.
Let's keep this civil, without a descent into tribalism, please.
I had some interesting results with 2 scenes; the one that nearly maxed the cards out saw much less of a performance boost.
The system with the Titan X Pascals has an Intel i7-6720K CPU; the system with the Titan X Maxwells has an AMD FX-8320 CPU.
Scene with 3 Genesis 3 figures (2 visible) saw a 57% increase!
The next one is interesting: a scene with 8 Genesis 3 figures (yes, eight) saw only a 16.75% increase. I think this one just about maxed out the Titan's RAM. NVIDIA Inspector reported 12 GB usage when I had SubD at 2. I'm sure that was overkill anyway, but all it usually means is another 20-30 seconds setting the scene up, so whatever. It kept crashing before finishing. I dropped SubD down to 1, which brought me just down to 11,900 MB usage, and then I was able to render the scene.
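Why dropping one SubD level frees so much memory follows from how Catmull-Clark subdivision works: each level quadruples the face count. A quick sketch; the base face count here is a made-up round number, not the real Genesis 3 mesh:

```python
def subdivided_faces(base_faces: int, subd_level: int) -> int:
    """Catmull-Clark subdivision quadruples the quad count at each level."""
    return base_faces * 4 ** subd_level

base = 17_000  # hypothetical base mesh size
for level in range(3):
    print(f"SubD {level}: {subdivided_faces(base, level):,} faces")
```

So SubD 2 carries sixteen times the base geometry per figure, which adds up fast with eight figures in the scene.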
I replaced my GTX 980 with a GTX 1070. Render speed is pretty good, but now the editing window seems quite laggy, choppier than with my 980. Specs: i7, 24 GB, latest Daz beta, NVIDIA drivers 375.63, Direct3D 11.
On the NVIDIA control panel > 3D settings > Adjust image settings: I switched from Let the 3D application decide to Use my preference emphasizing Performance. This didn't seem to help. I get the feeling that I need to run some type of performance test or update some component. Has anyone experienced similar results?
Well, my specs are:
Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
Motherboard: ASUS Maximus VIII Gene
Windows 10 Pro
Graphic Cards: Dual 8GB NVIDIA GTX 1080
On page 19 of this thread, near the bottom, I posted a render that had about 33 characters, I think, a mix of Gen 2 and Gen 3. It took ages to reload the scene, about 10 minutes I think. There was one figure, number 34, that I had to delete because she kept warping. Even after I deleted her and added a brand new character, the T-pose was OK, but using any other pose would warp her into Slenderman proportions. I think that may be the new Daz build, not the cards, as the previous full build did not have this problem. In fact I added more characters, I think hitting over 40, but I ended up removing a lot as it was getting laggy just setting them up, and when re-entering Daz to continue it again took ages to load the scene, sometimes stalling. So even though I could do 30+ figures, I think around 20 may be the best max for me. Don't know if that helps you.
So I tried visiting the NVIDIA official site and it keeps getting blocked by my antivirus, which says it's an extremely high risk site, even going via their links on their Facebook page and the GeForce site.
On my GTX 1070, Iray rendering speed is close to a GTX 950. Why?
PC Configuration:
Intel i5-6600K
GTX 1070
16GB DDR4
AsRock Z170A-X1
Samsung SSD 850 Pro 250GB.
Haha, I ordered a GTX 1070 with my new hardware. Compared with the GTX 580 I'm using now, I can only win. ;-)
Another render comparison, exact same scene rendered on the two different comps (Maxwell Titan X vs Pascal)
3 Genesis 3 figures, 38.6% faster.
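For anyone wanting to turn their own two render times into a percentage like the ones in this thread, the arithmetic is just the ratio of old time to new time. A sketch; the example times are hypothetical, chosen only to reproduce a 38.6% figure:

```python
def speedup_pct(old_seconds: float, new_seconds: float) -> float:
    """Percentage speed increase when a render drops from old to new time."""
    return (old_seconds / new_seconds - 1.0) * 100.0

# Hypothetical times: 180 s on the older card, 129.9 s on the newer one.
print(f"{speedup_pct(180, 129.9):.1f}% faster")  # prints: 38.6% faster
```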
Any idea when the next beta is coming out? I'm eager to continue my image series of Taylor Swift.
I've found a problem now after testing the beta out with my 1070 card. I will render a scene and the iterations fly by compared to just using my CPU. Then I'll move the camera to a different angle and render again, with the same fast results. I keep doing this, moving the camera around for a different view and then rendering, but after doing this a number of times (no pattern to the number that I've noticed, but probably 7 to 12 times), the GPU will stop being used and the renders will slow right down. In the box it appears to be using the GPU, because all of that GPU information shows up and there are no errors, but the render just crawls along, and what should take a couple of minutes takes an hour or more. So what I do then is save the scene with the new camera angle I just tried to render, shut down Daz and restart it, and then the same scene and camera angle renders perfectly at high speed again. That tells me it has nothing to do with my scene and is probably a hiccup somewhere in the Beta.
Are you closing each completed one?
If you are leaving the finished renders open, you are going to eventually run out of memory on the video card and drop to CPU mode...
That's what it sounds like is happening, but it isn't being reported as happening.
7 or 8? No idea, it depends on what else your G3s wear, on top of the landscape and the lights.
But seriously... 2 GB is usually 2 G2s with clothing, hair and a landscape/light. Those are rough estimates, though, and it pretty much depends on what textures and subdivision you are using.
As for VRAM... I've found this one to be very useful: http://www.daz3d.com/iray-memory-assistant
Are you keeping any of the render windows open? If so, AND you're using optix acceleration, the memory on the card is not freed - so at some point, the next render doesn't have enough memory available to run on the card. You can verify by viewing the log - look for 'memory allocation error' or something similar.
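If you don't want to scroll the whole log by hand, a small script can scan it for allocation failures. A sketch, assuming a plain-text log; the sample lines are illustrative, not exact Iray wording, and the log's location varies by Daz Studio version and install:

```python
import re

def find_memory_errors(log_text: str) -> list[str]:
    """Return log lines that look like Iray memory-allocation failures."""
    pattern = re.compile(r"memory allocation|out of memory|cuda error",
                         re.IGNORECASE)
    return [line for line in log_text.splitlines() if pattern.search(line)]

# Illustrative log excerpt (not verbatim Iray output):
sample = (
    "IRAY rend info : rendering iteration 40\n"
    "IRAY rend error: memory allocation failed, falling back to CPU\n"
)
for hit in find_memory_errors(sample):
    print(hit)
```

Point it at the contents of the Daz log file after a suspiciously slow render and any hits tell you the card ran out of memory.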
I have done multiple renders on a GTX 1080, and only at one point, when I was testing lots of G3 characters, adding 1 at a time and then rendering, did it fall back to CPU. For me, stopping the render & closing the extra windows sent it back to GPU, unlike the last build, where if I did that I had to close Studio & reload the scene, so I was pleasantly surprised :)
In my test I got 9 barely dressed characters & a floor into around 7.5 GB (as attached). My card runs the monitor, so I estimate the usage for DAZ at around 7 GB. I did not do anything but load & render, so no doubt you could get more by reducing texture sizes or lowering SubD, but I am too lazy to change things for testing :)
Sounds like a possibility, thanks, I'll test that out.
The only thing, though, is that when I render the first time, the box says I'm only using about 120 MB of my 8 GB of VRAM, which should give me about 40 renders of this size before I run out, and it usually happens before I get to 12. And it doesn't happen all the time; it's only happened a few times, and then after I reboot and load the scene I don't have problems with that scene anymore, although I haven't tested to see if your idea is true. Sometimes I've actually had 2 or 3 instances of Daz open at the same time, rendering different scenes (which seems to work fine), so maybe I've been doing that when this happens. In any case, what you're suggesting sounds very plausible, so I'll do some testing to see if that is the case.
I'm not getting any errors. What if I wasn't using optix, would it free up the Vram then?
See also my reply above to mjc1016, as the answer was similar...:)
All I can say is that I was seeing this with OptiX acceleration on, and turning it off freed the memory on the card.
Looks like you guys were right :)
I was running out of memory because I had the renders still open. I didn't find the errors because they appear way up the page in the log, where the render starts. There is no other alert that this has happened, not even in the little box that tells you what is happening during the render, other than it suddenly starting to crawl. Makes me appreciate this new Beta that is allowing me to use my card, as every time this happens it reminds me of what it was like just a couple of weeks ago.
With OptiX on, I run out of memory faster than with it off, and closing some of the renders is all that's needed to fix it. I still can't figure out the memory usage. In some places in the log it says it's allocating a couple of hundred MB; in other places it says it is using 7 GB for a single model in a basic scene. If it's really using 7 GB, then I shouldn't even get a second render, let alone a dozen, so I'm not sure the memory usage is being reported consistently.
I'm also wondering about animation. If you render an animation as single frames that you assemble later, how do you get Daz to shut down each frame after it's rendered?
Single frames direct to file. So what it does is render the frame as a PNG, let's say, then moves on to the next one.
So when it moves onto the next one, it saves the one it just did and closes it?
Yep...
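Once the frames are on disk as numbered PNGs, a tool like `ffmpeg` can assemble them into a video. A sketch that just builds the command, since the frame pattern, frame rate, and output name are all assumptions to adapt:

```python
import subprocess

def assemble_cmd(pattern: str, fps: int, out: str) -> list[str]:
    """Build an ffmpeg command that turns numbered PNG frames into an MP4."""
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

cmd = assemble_cmd("render/frame_%04d.png", 30, "animation.mp4")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on the PATH):
# subprocess.run(cmd, check=True)
```

`-pix_fmt yuv420p` is there because many players won't handle the RGB pixel format ffmpeg would otherwise infer from PNG input.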
Yeh, sorry; I'm not being tribal, merely incredulous.
Sounds incredible, but there is so much variance from render to render, depending upon the scene, the lighting, the camera angle (pointing at lights vs otherwise), the host computer, etc. Where one render can be extremely fast on one card, another can be the opposite. Remember, we are comparing Maxwell to Pascal, and at the moment nobody really knows how each platform will perform relative to the other within particular aspects of a render. Since I first wrote what I wrote, there have been more tests, and although my 1070 kicks butt over the 980ti in general, I've found there have been times where I've seen the reverse, and I haven't been able to nail it down to anything in particular. Some scenes are just really fast, and others are slower. I'm still experimenting to determine what affects what. From what I've read, the 1070 is faster than the 1080 at the moment, but that could be because the 1080 isn't fully operational yet in this "Beta" version, or perhaps Pascal isn't optimally configured for it yet, or who knows? That's why it's still in Beta and not final release. It could also be because I'm using a "Founders Edition", and the 980ti I tested it against isn't.
I don't have a 980ti, my friend does and I compared rendering the same scene on her computer with her 980ti to rendering on mine with my 1070. It's probably because my GTX 1070 is a "Founders Edition" though, more than anything else :)
The CPU and other hardware also play into this. In the thread for the render test scene, you'll see that the same scene has different render times depending on what CPU is used. The card manufacturer might also influence whether a card is faster or slower.
Yes, that's what I'm saying...
A point people are forgetting is that if Iray Texture Compression is on, the *CPU* has to convert the textures from JPG/PNG to raw and then compress them before sending them to VRAM. This is extremely CPU dependent, and a reason it takes so long for the render to start showing results.
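That decode-then-recompress pipeline can be mimicked with stdlib compression just to see that it is pure CPU work before anything reaches the card. This is only an analogy; Iray's actual texture pipeline is not zlib, and the sizes here are made up:

```python
import time
import zlib

# Stand-in for an on-disk texture: a compressible byte pattern, ~1 MiB raw.
raw_pixels = bytes(range(256)) * 4096
on_disk = zlib.compress(raw_pixels)          # plays the role of the PNG file

t0 = time.perf_counter()
decoded = zlib.decompress(on_disk)           # step 1: PNG/JPG -> raw (CPU only)
for_vram = zlib.compress(decoded, level=6)   # step 2: recompress for VRAM (CPU only)
elapsed = time.perf_counter() - t0

print(f"one fake texture prepared in {elapsed * 1000:.1f} ms on the CPU")
```

Multiply that by dozens of 4K texture maps in a typical character scene and the pre-render pause Kendall describes is unsurprising.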
Kendall