Comments
Anyone else having some... odd memory issues with the new cards and Iray?
Now I'm a total newb, and either this is perfectly normal, or I'm doing stuff completely wrong. But there are things that feel... off to me.
For example, I'm getting a lot of warnings from Windows to close down some apps to free critical memory when rendering.
Here's the thing: I have 16 gigs of RAM and a GTX 1070 with 8 gigs of VRAM.
When rendering my scene, according to several apps, I'm using up about 10.9 GB of RAM, and in this case nothing else is open, just Daz rendering. My VRAM, on the other hand, only fills up to 3.6 GB while rendering, not even close to full.
When I stop the render, my RAM drops down to about 3.9 GB and my VRAM is almost empty.
The scene itself has a small building, 2 G3 models, a handful of props, nothing really special: one mesh light, and I'm using DOF.
Is this normal behavior, or does this seem odd to other people as well?
It's not you and it's not normal... but it is Windows being picky, especially if Windows isn't managing virtual memory. Windows reserves large chunks of memory for itself, so if things start getting close to needing more memory than is left over after all the reserved memory is accounted for, it will start issuing warnings if there isn't enough virtual memory to make up the difference.
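If you want to watch those numbers yourself while a render runs, here is a minimal sketch in Python (assuming Python 3 and the third-party psutil package, neither of which is part of Daz Studio) that polls physical RAM and the page file:

```python
# Minimal sketch: poll system RAM and page-file usage while a render runs.
# Assumes Python 3 with the third-party psutil package (pip install psutil).
# Press Ctrl+C to stop.
import time

import psutil

while True:
    vm = psutil.virtual_memory()  # physical RAM
    sm = psutil.swap_memory()     # page file / swap
    print(f"RAM used: {vm.used / 2**30:.1f} GiB of {vm.total / 2**30:.1f} GiB "
          f"({vm.percent}%), page file used: {sm.used / 2**30:.1f} GiB")
    time.sleep(5)
```

Running that alongside a render should show whether it's physical RAM or the page file that's actually running short when the warnings appear.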
A quick question for those who may know -
I currently have a GTX 1060 in my PC, and I'm thinking about adding a 1080 to it to get more speed.
Will I be able to use both cards during a render, or would they need to have SLI enabled?
What speed increase could I theoretically expect? Double, triple, infinity?
Thanks! :)
No need for SLI... in fact, Iray does not do well with it enabled. Yes, both will be used as long as the scene fits on the card. If the scene uses more memory than the card has, that card will be dropped and not used. So if you have a 4 GB card and an 8 GB card, both will be used on scenes of 4 GB or less. Over 4 GB, the smaller card will drop out and just the 8 GB one will be used. If the scene is greater than 8 GB, then it will be rendered in CPU mode only.
So it's kind of hard to say just how much of a boost. With two identical cards, used just for rendering (not using one to drive the monitor), it's almost but not quite 2x as fast... in some cases a bit faster, in others a bit slower.
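If you want to check what each card in a system has before deciding, you can ask the driver directly. Here's a rough Python sketch using nvidia-smi (which ships with the Nvidia driver); the scene-size figure is a made-up example, since Daz Studio reports the real footprint in its log:

```python
# Rough sketch: list each GPU's total VRAM and flag which ones could hold a
# scene of a given size. Relies on nvidia-smi, which ships with the driver.
import subprocess

SCENE_SIZE_MB = 4500  # hypothetical scene footprint; check the Daz log for the real number

output = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=index,name,memory.total",
     "--format=csv,noheader,nounits"],
    text=True,
)
for line in output.strip().splitlines():
    index, name, total_mb = [field.strip() for field in line.split(",")]
    fits = int(total_mb) >= SCENE_SIZE_MB
    print(f"GPU {index} ({name}): {total_mb} MB VRAM -> "
          f"{'would participate' if fits else 'would be dropped for this scene'}")
```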
Great - thanks for the info! Just what I had hoped for. :)
In case anyone's interested, here are some benchmarks: DAZ Studio 4.9, Version 4.9.3.128 (Public Build)
Board: Asus X99-E WS/USB 3.1; CPU: Intel i7-6850K, 6 cores/12 threads (overclocked to 4400 MHz); RAM: 32 GB (4x 8 GB, 2400 MHz DDR4)
GPU1: MSI Seahawk GTX 1080, 8 GB GDDR5X @ 10108 MHz, core clock 1708 MHz base / 1847 MHz boost, CUDA cores: 2560 (Pascal)
GPU2: Gigabyte Windforce GTX 980 Ti, 6 GB GDDR5, core clock 1190 MHz base / 1279 MHz boost, CUDA cores: 2816 (Maxwell)
GPU3: Gigabyte Windforce GTX 780, 3 GB GDDR5, core clock 954 MHz base / 1006 MHz boost, CUDA cores: 2304 (Kepler)
GPU4: Zotac GTX 980 Ti AMP! Extreme, 6 GB GDDR5, core clock 1253 MHz base / 1355 MHz boost, CUDA cores: 2816 (Maxwell)
Scene with 1268 Iray Iterations:
CPU only = 64:28
GPU1 only = 09:54
GPU2 only = 09:58
GPU3 only = 26:04
GPU4 only = 09:41
GPU2+GPU4, without SLI, without CPU, without OptiX = 05:14
GPU2+GPU4, without SLI, without CPU, with OptiX = 04:22
GPU2+GPU4, with SLI, without CPU, without OptiX = 05:12
GPU2+GPU4, with SLI, without CPU, with OptiX = 04:19
GPU1+2+3+4, with SLI, without CPU, without OptiX = 03:23
GPU1+2+3+4, with SLI, without CPU, with OptiX = 02:51
GPU1+2+3+4, with SLI, with CPU, without OptiX = 03:29
GPU1+2+3+4, with SLI, with CPU, with OptiX = 02:54
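To put those times in perspective, here's a quick Python sketch that converts the mm:ss figures above into speedup factors relative to the CPU-only run (the labels are just a sample of the rows):

```python
# Quick sketch: convert the mm:ss times above into speedup vs. the CPU-only run.
def to_seconds(mmss: str) -> int:
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

times = {
    "CPU only": "64:28",
    "GPU1 only (GTX 1080)": "09:54",
    "GPU2+GPU4, OptiX": "04:22",
    "All four GPUs, OptiX": "02:51",
}

baseline = to_seconds(times["CPU only"])
for label, mmss in times.items():
    print(f"{label}: {mmss} -> {baseline / to_seconds(mmss):.1f}x vs CPU")
```

By that math, the single GTX 1080 is roughly 6.5x the CPU, and the full four-card stack with OptiX is roughly 22.6x.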
This is great info - thanks for that!
What I gather from your benchmarks is that OptiX does provide a small performance improvement, while adding the CPU and/or SLI does not.
Good to know. :)
Do your own testing. On my computer (same motherboard as vonbraun, but with an Intel i7-6900K, 8 cores/16 threads, not overclocked; 32 GB RAM (2x 16 GB DDR4-3000); MSI Armor GTX 1080), OptiX slows down rendering. And perhaps because of the power of my CPU, rendering with the CPU improves render times as well... (But as namffuak points out in another thread, the extra speed may not be worth running the CPU at 100% for long renders.)
(For anyone reading this who's new-ish to Iray: when running benchmark tests, in the Progressive Rendering section under Render Settings, set Quality to "Off" and Max Time to "0". Then set Max Samples to whatever you consider a reasonable value, like the 1268 vonbraun used, or whatever. That way your scene will always render to Max Samples.)
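If you'd rather pull the timing out of the log than watch the clock, something like this Python sketch can work. It assumes your Daz Studio log contains a "Total Rendering Time" line and uses the usual Windows log location, which may differ on your install:

```python
# Sketch: pull the last reported render time out of the Daz Studio log.
# Assumes the log contains a "Total Rendering Time: ..." line; the path below
# is the usual Windows location but may differ on your install.
import os
import re

LOG_PATH = os.path.expandvars(r"%APPDATA%\DAZ 3D\Studio4\log.txt")

last_time = None
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = re.search(r"Total Rendering Time: (.+)", line)
        if match:
            last_time = match.group(1).strip()

print(last_time or "No render time found in the log.")
```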
Here are my results from the "Iray Starter Scene: Post Your Benchmarks" thread from SickleYield.
Specs: ASRock X99 Extreme 6; i7-5820k overclocked to 4.5 GHz; Zotac GTX 1080 AMP! (core clock 2050 MHz, VRAM 5200 MHz); EVGA Titan X SC (1490 MHz)
GTX Titan X with GTX 1080: 1 minute 33.75 seconds
GTX 1080: 2 minutes 50.53 seconds
Titan X: 2 minutes 32.77 seconds
CPU (i7-5820k OC 4.5 GHz): 22 minutes 19.97 seconds
Hope this helps
Thanks, Jura
Is anyone using these cards with Iray and getting very grainy renders, even when the settings (samples, time, etc.) are maxed out and the lighting is bright? One render did 15,000 iterations and still managed to look grainy when reaching 100%. (My card is an Nvidia 1060 with 6 GB of RAM.)
I'm looking to buy a new machine this weekend. I've been doing ok with the 860M in my laptop, but it looks like I'll be getting a new laptop with a 1060, with 6 gigs video memory instead of the 2 gigs I have now. Will that probably be enough for most scenes? Sure, a 1070 or 1080 would be nice, but I don't think I can swing that budget.
The other thing I'm looking at is regular RAM. The system I'm looking at has 12 gigs, or I can spend another $200 to boost it to 16 gigs. Recommendations?
Normally laptops are seriously crippled when it comes to graphics rendering unless you spend a hell of a lot of money. Which one were you looking at? Plus, the more memory the better.
Looking at the MSI GE72VR Apache Pro-009 or the -010. Also looked at their Stealth product line, but the Apache seems like a good compromise between the portability I need and the durability and heat management of a larger laptop.
I'm currently using an Acer Aspire V17 Nitro Black. Only one project so far has really pushed its memory limits, but several have dumped to the CPU. I'm replacing it mostly because the USB ports are failing and it has some networking issues.
@thistledownsname
I actually went with an Alienware laptop due to the Amplifier box. I do all the staging while I'm traveling on the integrated 970M. Then when I come back from my travels, I connect to the Amplifier with a Maxwell Titan X in it and render at a proper resolution using batch render.
This thingy
I was considering selling my 970M-based one and getting a 1070 or 1080 one (when released), and still using the Amplifier when I get home.
Nice! Yeah, I have an Asus laptop with the same Nvidia chipset as yours; I think they only have like 100 CUDA cores. You'll certainly notice the speed with the 1060, but you do need at least 16 GB of system RAM to do anything big on it.
1280 cores, it looks like, but the 3 GB of RAM makes me sad :P
Hmm. I didn't know about anything like that Amplifier box. I'll have to look at my notes on the Alienware system when I get home. Unfortunately, the comparable Alienware systems are bigger and almost twice the weight.
It's not a light laptop at all. But, the ability to stick a desktop card on is REALLY nice. Keeps the laptop relevant.
PureDigital101, that happens all the time, depending on the amount of light in the scene, or tone mapping settings. If you want to render your scene longer, it's really easy. Follow the instructions quoted below from an earlier post, and set your Max Samples to 15000. (If you need longer, you can change/remove limits in the parameter settings for Max Samples.)
(In the latest beta release, they've added a way to turn off Max Samples, too. So with the next stable Release, it looks like you'll be able to set Iray to render indefinitely!)
My mistake, the one on my laptop is an 820M.
Thanks for the reply, but it only happens on my second PC, which is running the beta version and has the 1060 card, and it happens on even the brightest-lit scenes. Plus, I always have my samples at max. My other PC has a 750 Ti and, though it takes a lot longer, renders the identical setups with no noise at all.
Compare ALL the render settings...pay special attention to the Filters section.
Never had to touch the Filters section before, but I will take a look, thanks :)
I don't have the beta installed (I have one of those CPUs that it doesn't work quite right with), so I can't check to see if they are the same...usually they should be, but every once in a while a beta will have differing defaults.
Yeah, they're the same. Any idea of the best settings for those?
For most things, the defaults are fine... and in Optimization, caustics should be off unless you are doing gems or glasses of liquids/water as featured items; most of the time Architectural should be off, though for some interior scenes it may be helpful.
You're welcome. I'm stumped as to why it would be different between the beta running a Pascal architecture card and (I assume) the released version running a Maxwell architecture card. Until I recently bought a new computer, all my renders were CPU only, so I can't test it to try to duplicate the problem. I can say, CPU only or 1080, I've not seen any grainy images after 15K samples. And I tend to like the drama of dark, high contrast images.
Any possibility the 1060 is defective? (I know. Bite my tongue...) Maybe there are some diagnostics you could run...? Just grasping at straws, here. I hope it turns out to be some setting or another that resolves the problem for you.
980 Ti here, and yes, this happens sometimes, and so far I'm not sure why; lighting might be an issue, and so too might the shader settings. But I'm by no means certain.
I don't have to max out samples or anything, I just get a very grainy render sometimes compared to the same scene with a different character. I have to up the settings (ratio % and quality) to get it to finish.
I do hope not; the PC is less than two weeks old. I'm hoping it's just the beta version that is causing this. The card is actually Nvidia's own and hasn't been messed with, overclocked, etc. Thanks for your help though, I will see if I can post a screenshot of both machines' renders.