Nvidia 980 or 970 SLI
Someone has probably already asked this, but I haven't been able to find it, so I'll ask again. First, I am VERY excited by Daz3D 4.8! I love the fact that they've brought in a new rendering engine to join 3Delight. In preparation for the 4.8 release, I'm thinking about upgrading my video card. I have an old 470 and am considering either a 980 GTX or two 970 GTXs in SLI. Can anyone advise on how Iray treats SLI-configured systems versus single-card installations? Also, does Iray recognize the memory from both cards (7.5 to 8 GB, depending on who you talk to) as one memory space in SLI? Similarly, I assume the SLI configuration lets the cards pool their CUDA cores as one resource.
I can do two 970 GTXs for $700, with double the memory and 50+% more CUDA cores versus the 980 GTX.
Lastly, can anyone say whether Daz is going to put a render queuing system in place? I would love to queue up 5-10 renders, go to sleep or to work, and come back to 10 completed renders.
Thanks in advance for any answers anyone provides.
Comments
I'm curious about this as well. I'm going to need to replace my heap eventually.
Iray doesn't use SLI. https://devtalk.nvidia.com/default/topic/493847/iray-needs-neither-sli-nor-cuda-/
Also, there are limits to what cards other than the 980 can do with memory above 3.5GB.
Anandtech has a great explanation for those interested. The gist is that, because of how the chips are binned, the 970 (and presumably now the 960 too) has issues when addressing all of its memory. The 970 can access the full 4GB, but the last 0.5GB sits behind a slower path, which can cause performance problems. I have no idea how this affects renders, or whether scenes that would fit on a 980 might have issues on the 970 or 960. Since Daz 3D is dealing with Nvidia, perhaps they could ask Nvidia to shed some light on the situation.
http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation
It's been some time since I read the article, and as I wasn't planning on getting the card, I wasn't reading specifically with rendering in mind; it was general geek interest.
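From memory of the article (worth checking against the link above), the bandwidth split works out roughly as below. The numbers are the 970's published specs: a 256-bit bus at a 7 GHz effective memory clock, with the 4GB divided into eight 512MB segments, seven fast and one slow. Treat this as back-of-the-envelope arithmetic, not a benchmark:

```python
# Rough arithmetic for the GTX 970 memory split, per the Anandtech
# article linked above. Published specs only; an illustration, not a
# measurement.

bus_width_bits = 256        # total memory bus width
mem_clock_ghz = 7.0         # effective GDDR5 data rate

total_bw = mem_clock_ghz * bus_width_bits / 8   # GB/s across the full bus

segments = 8                # 8 x 512 MB = 4 GB total
fast_segments = 7           # 3.5 GB reachable at full speed

fast_bw = total_bw * fast_segments / segments   # ~196 GB/s
slow_bw = total_bw / segments                   # ~28 GB/s for the last 0.5 GB

print(f"total: {total_bw:.0f} GB/s, fast 3.5GB pool: {fast_bw:.0f} GB/s, "
      f"slow 0.5GB pool: {slow_bw:.0f} GB/s")
```

So the last half gigabyte is roughly an order of magnitude slower to reach than the rest, which is why "it has the full 4GB" and "it has issues above 3.5GB" are both true.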
Geek interest or not, that's a shady-sounding "mistake" for Nvidia to be claiming.
So you disconnect the SLI strip and run them side by side. Then you can use both, but the VRAM doesn't stack. [strike]You're limited to the lower VRAM, which makes that double VRAM a waste if using both cards.[/strike]
Check this out:
http://www.daz3d.com/batch-render-for-daz-studio-4-and-rib
There is a thread here:
http://www.daz3d.com/forums/discussion/32786/
DraagonStorm assures me that it does work with Iray.
Cheers,
Alex
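For anyone who wants a feel for what a render queue actually does before buying the plugin: it just works through a list of saved scenes one at a time. Here's a minimal sketch in Python, where "render_scene" is a hypothetical stand-in command for whatever drives the renderer; the Batch Render plugin linked above does the real work inside Daz Studio itself:

```python
import subprocess
from pathlib import Path

# Hypothetical overnight render queue: walk a folder of saved .duf
# scenes and render each one in turn. "render_scene" is a made-up
# stand-in; the Batch Render plugin handles this inside Daz Studio.
QUEUE_DIR = Path("render_queue")
OUTPUT_DIR = Path("finished_renders")
OUTPUT_DIR.mkdir(exist_ok=True)

for scene in sorted(QUEUE_DIR.glob("*.duf")):
    out_file = OUTPUT_DIR / (scene.stem + ".png")
    print(f"Rendering {scene.name} -> {out_file.name}")
    # One render at a time, so each gets the GPU to itself overnight.
    subprocess.run(["render_scene", str(scene), "-o", str(out_file)],
                   check=True)
```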
So you disconnect the SLI strip and run them side by side. Then you can use both, but since VRAM doesn't stack, you're limited to the lower VRAM, which makes that double VRAM a waste if using both cards.
Actually, Iray will use both cards as long as the scene fits in the card's VRAM. If it doesn't fit, the card will be dropped from the render.
As I understand it, let's say you have two cards with two different amounts of VRAM: they will not add up to one lump sum, so you'd only have access to the VRAM on one card (which I thought would be the lower amount). You may get the speed of both cards, but the scene has to fit in the VRAM of one, not spread over both.
If I'm wrong, or switching my info around, feel free to show me the light, so to speak. I've read so much stuff over the course of this release that my head is spinning. lol
If it's too big for one card but fits on the other, then only the smaller card drops out.
The cards are completely separate. Iray copies the entire scene to every card checked in the render settings; once copied, all cards begin processing the render. If the scene doesn't fit on a card, that card is dropped from the render process. If a card runs out of VRAM during the render, it is dropped at that point.
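To make that concrete, here's a toy sketch of the selection logic just described. The card names and sizes are made up, and Iray obviously does all of this internally; this is only a model of the behavior, not Iray's code:

```python
# Toy model of Iray's per-card behavior as described above: every
# checked card gets a full copy of the scene, and any card that
# can't hold it drops out, cores and all. Numbers are made up.

cards = [
    {"name": "GTX 980", "vram_gb": 4.0, "cuda_cores": 2048},
    {"name": "GTX 970", "vram_gb": 3.5, "cuda_cores": 1664},  # fast pool only
    {"name": "GT 740",  "vram_gb": 4.0, "cuda_cores": 384},
]

scene_size_gb = 3.8  # the whole scene must fit on EACH card; VRAM never pools

active = [c for c in cards if c["vram_gb"] >= scene_size_gb]
dropped = [c for c in cards if c["vram_gb"] < scene_size_gb]

for c in dropped:
    print(f"{c['name']} dropped: scene ({scene_size_gb} GB) "
          f"exceeds {c['vram_gb']} GB")

total_cores = sum(c["cuda_cores"] for c in active)
print(f"Rendering on {[c['name'] for c in active]}, "
      f"{total_cores} CUDA cores total")
```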
OK, so what I'm getting is that it's better to have two high-VRAM cards so you can keep using both, since the scene has to fit on each card: 4GB+4GB instead of 2GB+4GB, at least if you plan on using both in the process (plus more cores = more win). And I'm assuming that by 'dropping out' you mean even the cores of that card no longer get taken into consideration.
So I was right that VRAM doesn't stack, but wrong about how it gets used in a multi-GPU setup.
Thanks!
Correct. And yes, by 'dropping out' I mean that the card is no longer used for the render, and its cores become immaterial. One other aspect to keep in mind: as near as I can tell on my system (GT 740, 4 GB VRAM but only 384 cores), OpenGL will eat about 500 MB of the card running the monitor(s), and for some reason the card doesn't report as much as it uses. Right now, driving two 1920x1080 monitors, the card is showing 128 MB in use.
I'm waiting for DAZ to get their agreement together with the card maker, and hoping I can afford a pair of GTX 980s; I'll keep the 740 just to drive the monitors. (My motherboard supports multiple GPU cards :-) )
Feeble as the 740 is, it still contributes about 25% to my renders on an i7 with 6 cores running at 3.5 GHz.
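If you want to see what the desktop is actually eating, nvidia-smi (it ships with the Nvidia driver) reports per-GPU memory use; here's a small sketch that shells out to it. I'm assuming the query flags below are supported by your driver version:

```python
import subprocess

# Ask the driver how much VRAM each GPU is actually using.
# nvidia-smi ships with the Nvidia driver; the --query-gpu flags
# below should work on any reasonably recent version of it.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.strip().splitlines():
    name, used_mb, total_mb = [field.strip() for field in line.split(",")]
    print(f"{name}: {used_mb} MB of {total_mb} MB in use")
```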
I'm in the same boat as you. My card is a 4GB EVGA 770 FTW edition, running two 1920x1080 monitors, one HDMI and one DVI.
I'm not in the same boat, but I'm thinking along the same lines. I have an 8600GT (CUDA compute 1.1) that can't do Iray at all. If I replace it with an ultra-low-watt GT 730 to drive the monitors, the watts can go to the other cards and the CPU doing the Iray work.
There are other benefits to the idea for me as well: the watt hogs will be idling (no fan noise) when I'm not in Daz Studio, so I can record music, lol.
So you disconnect the SLI strip and run them side by side. Then you can use both, but the VRAM doesn't stack. [strike]You're limited to the lower VRAM, which makes that double VRAM a waste if using both cards.[/strike]
When you say remove the SLI strip, what do you mean exactly? Will the system recognize both cards and use them concurrently? Do they have to be the same card, or could you mix, say, a 660 and a 970?
You do not have to remove the SLI strip to disable SLI. You can disable SLI through the Nvidia control panel app.
Ah, okay. Thank you very much.