Comments
The 900 series cards generally need 200-300 watts; it varies by card.
As an example of rendering: I rendered a scene with two figures and a background (I just posted it in the gallery). It takes the following to reach 81% convergence.
970
Finished Rendering
Total Rendering Time: 9 minutes 35.3 seconds
980ti
Finished Rendering
Total Rendering Time: 6 minutes 28.66 seconds
970 and 980ti
Finished Rendering
Total Rendering Time: 3 minutes 41.70 seconds
Now I started it on the CPU and let it go for a bit: Total Rendering Time: 11 minutes 33.97 seconds - 19.8%
That gives an idea of the difference. Now, I had a scene the other day that dropped to CPU on the 970; I tried it on the 980ti and that doesn't have enough memory either, so it took about 150 minutes on CPU. Ideally I'd like a Titan, but I'm likely to wait until next gen for that, or until I can pick one up cheap.
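Out of curiosity, here's a minimal sketch (Python, using the times quoted above) of how the two-card number lines up if you treat each card's throughput as 1/time and add the rates. It's only a rough model; real scaling depends on the scene and on overhead.

```python
# Rough model: each card contributes work at a rate of 1/time,
# so two cards together finish in roughly 1 / (1/t1 + 1/t2).
# Times below are the ones quoted above, in seconds.

t_970 = 9 * 60 + 35.3       # 575.3 s
t_980ti = 6 * 60 + 28.66    # 388.66 s

t_both_est = 1 / (1 / t_970 + 1 / t_980ti)
print(f"Estimated 970 + 980ti time: {t_both_est:.0f} s")
# Prints ~232 s, in the same ballpark as the measured 3 min 41.7 s (221.7 s).
```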
Now, with regard to PSUs: if you have a gold- or platinum-rated unit, then you can be confident that a 600 watt PSU will run a 970 with an i5 or i7 that isn't in the performance range and isn't overclocked; it would probably be OK overclocked too, but that depends on what else you have in the system. I'd certainly not rely on guesswork before deciding, though.
@nicstt thanks for your info - I am surprised that your 970 is just 2 minutes faster than rendering on your CPU? What are the specifications of your system?
HI EVERYONE, I know this may be silly, but I just bought the Acer Aspire M3450-UR30P for a really good price (100); the mobo is an Acer RS880PM-AM. I want to buy a GeForce GTX 750ti or GeForce GTX 690 (if I can find the 690) to help with the Iray rendering time, because right now it takes from 1hr to 2 1/2hrs for an 1100x1400 image with 2 Genesis 2 figures, fully dressed, with HDR lighting. I also want to upgrade the RAM to the max first, and I know I'm going to need a PSU, so please, any help. I don't have a lot of money and I don't know which GPU to get; if I can reduce the time to 30min that would be nice. A new PC is off the table, and this PC works well for me; I made the jump from a dual core. I've been reading a lot here, there, and everywhere, and there's no really good answer: some say more CUDA is better, others VRAM, and others the processor. BTW, the GPU I want to get would be exclusively for rendering; I'm not planning to attach any screen.
These are the full specs:
Product Description: Acer Aspire M3450-UR30P
Processor: AMD FX 4100 / 3.6 GHz ( 3.8 GHz ) ( Quad-Core )
Processor Main Features: AMD Turbo CORE Technology
Cache Memory: 4 MB L2 cache
Cache Per Processor: 4 MB
RAM: 12 GB (installed) / 16 GB (max), DDR3 SDRAM, 1333 MHz, PC3-10600
Hard Drive: 1 TB, standard, Serial ATA-300
Graphics Controller: ATI Radeon HD 4250
OS Provided: Microsoft Windows 7 Home Premium 64-bit Edition (RECENTLY CHANGED TO W10)
Power Provided: 300 Watts
Motherboard: Acer RS880PM-AM
PS: I've been researching this since I bought the PC (about 2 months ago) and I'm ready to throw myself out of my chair!! XD
BTW, I'm not a PC gamer; I have an Xbox and a PS for that.
THNX FOR YOUR TIME IN ADVANCE.
I myself had an ancient Nvidia card with hardly any CUDA cores when I wrote that. I went and got something about half as good as what other people were saying Iray needs (all I could afford right now), and I get great results, but I am still struggling with slowness. I am thinking I have to figure out something with the settings now. The card really did make a huge difference. That might depend, though, on how bad your card was in the first place.
I expressed my original thought with a winkie emoticon to kind of clue you in that it's a wry comment and not that I am saying anybody should have to buy anything. Just to be clear.
Sorry, I missed this; I stopped it at the point shown in the quote below:
"Now I started it on the CPU and let it go for a bit: Total Rendering Time: 11 minutes 33.97 seconds - 19.8%"
@Handspan Studios ".. I am still struggling with slowness. I am thinking I have to figure out something with the settings now"
Are you using the default settings? Particularly the render settings (see attached). I have rarely felt the need to change these. HTH. I have stopped renders well before convergence if they seem largely noiseless.
This is really interesting, can I ask, what are your other PC Specs?
intel i7 4770k (I have an overclocked profile in the BIOS, but I don't run overclocked, although I might on occasion). Constantly pushing a CPU to the max while overclocked is bad for it over the long term as well. My bank balance is more important than my ePeen. :)
16GB RAM (another 16GB would be helpful, but might wait until I do an upgrade)
4 SSDs
Corsair AX860i PSU
Mechanical Keyboard, Good Mouse. 2 x 1440p monitors
A mechanical drive and a NAS for backups. (Can't have too many backups.)
Has anyone experimented with DazStudio 4.9 Beta yet (which includes the latest build of iRay)? I am curious to know if there is an improvement in render time.
At first I got better results, but over the last two or three builds they were about the same or slightly worse.
I've stopped using 4.9 and have no immediate intention to upgrade.
I have an i5-4690k. Would an i7-4790k make any difference in rendering speed? I would be rendering with a GPU and CPU. I have a 670 now, which I will be upgrading to Pascal when it releases.
Like, say, how much faster would a gtxXX70 + 4790k be vs a gtxXX70 + 4690k for HD Iray renders? Or would it make more sense to save the ~$100 it would take to buy an i7 and go for as beefy a GPU as possible? I plan on getting an xx70 card. That $100 could maybe go towards an xx80 instead. Or perhaps an xx70 with more RAM, as it seems we'll have more RAM choices this time (I'm predicting there will be as many as 4 different RAM configurations: 4, 8, 12, and 16 GB; 12 and 16 GB may be exclusive to the xx80 line). Too many choices!
Brilliantly stated, I was about to harp on this before folks wasted money on cards.
That's a nice system. I agree about overclocking the CPU - for GPU rendering there is little to no benefit, and you run the risk of causing issues / potentially invalidating the warranty.
Did you use OptiX Prime Acceleration with those benchmark times you posted?
I looked into the 900 series wattages.
My ZOTAC GTX 960/4 GB uses a max of 120 watts.
The 970 uses about 145 watts and the 980 about 165 watts, according to the specs on NVIDIA's site.
The Ti's and the Titans need about 250 watts, and Titan Zs about 375 watts. Yikes!
You can run a 960 on a 400 watt PSU. I am running mine with a Sentey 750W, 80 Plus Bronze, modular PSU. A bit overkill, but I didn't investigate this much prior to purchasing my card.
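If it helps anyone sizing a PSU, here's a minimal sketch of the headroom math using the board-power figures above. The CPU and "rest of system" wattages and the 30% headroom factor are my own assumptions for illustration, not measured values.

```python
# Rough PSU sizing: GPU board power + assumed CPU and system draw,
# plus ~30% headroom so the PSU isn't running near its limit.
gpu_watts = {"GTX 960": 120, "GTX 970": 145, "GTX 980": 165, "980 Ti / Titan": 250}

CPU_WATTS = 90        # assumption: mainstream i5/i7 under load
REST_OF_SYSTEM = 60   # assumption: drives, fans, RAM, motherboard
HEADROOM = 1.3        # assumption: ~30% spare capacity

for card, watts in gpu_watts.items():
    recommended = (watts + CPU_WATTS + REST_OF_SYSTEM) * HEADROOM
    print(f"{card}: aim for roughly a {recommended:.0f} W PSU or larger")
```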
I never use it. I've found in my tests that at best it makes small improvements, but generally little or none, or slightly worse results.
Not sure when I posted that. :) It was September; the 970 was the most power-friendly back then, IIRC, which admittedly is less than 200 watts, but always allow some spare. Plus there was my qualifier of "generally".
It appears to me that IRay rendering speed pretty much boils down to how many CUDA cores you can throw at it. The parallel processing power of CUDA cores is a perfect match for rendering.
Given that I want to find the best bang for the buck (within a $2,000 - $3,500 budget), I am thinking that my next computer should be designed initially around a single 980Ti card but be expandable to support 3-way SLI, with the anticipation of later buying two more 980Ti cards. This impacts the initial single-card build in a few ways, adding upfront costs of an extra $300-$500.
So looking at $/CUDA-Core I believe the 3-way SLI 980Ti solution to be the best bang for your buck.
Here is a brief comparison of GeForce cost per CUDA core:
GTX 980Ti = 2816 CUDA cores = $600 = $0.21/core
GTX 980 = 2048 CUDA cores = $480 = $0.23/core
GTX 970 = 1664 CUDA cores = $300 = $0.18/core
GTX 960 = 1024 CUDA cores = $180 = $0.18/core
Given that the cost per CUDA core is roughly the same, the option of using one computer to run three GTX 980Ti cards, giving you 8448 CUDA cores, with only one upgraded computer box to buy, seems like my best bet.
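For anyone who wants to rerun the numbers, a quick Python sketch of that cost-per-core comparison (prices are the ones quoted in the list above, not current street prices):

```python
# Cost per CUDA core, using the prices quoted above.
cards = {
    "GTX 980Ti": (2816, 600),
    "GTX 980":   (2048, 480),
    "GTX 970":   (1664, 300),
    "GTX 960":   (1024, 180),
}

for name, (cores, price) in cards.items():
    print(f"{name}: ${price / cores:.3f} per core")

# Three 980Ti cards: 3 x 2816 = 8448 cores for about $1800 in GPUs,
# before the extra PSU/motherboard costs mentioned above.
```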
Lastly, using NICSTT’s rendering times posted above, one can estimate the rendering time using a 3-way SLI arrangement of 980Ti cards as follows:
Card / Rend Time (secs.) / No. of CUDA cores / RT x CC:
970: 575 s, 1664 cores, RT x CC = 956,800
980Ti: 389 s, 2816 cores, RT x CC = 1,095,424
970 & 980Ti: 222 s, 4480 cores, RT x CC = 994,560
3-980Ti: ~118 s (approx.), 8448 cores, RT x CC ≈ 1,000,000 (approx.)
I may be overlooking something, but it looks to me like the number of CUDA cores is the "core" issue!
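Here's the same estimate written out as a small Python sketch, so the assumption is explicit: it takes render time x CUDA cores to be roughly constant (i.e. perfect scaling with core count, which is optimistic) and extrapolates to a 3x 980Ti box.

```python
# Measured configurations from the times posted earlier: (cores, seconds).
measured = {
    "970": (1664, 575),
    "980Ti": (2816, 389),
    "970 & 980Ti": (4480, 222),
}

# Average the render-time x core-count product over the measured runs.
k = sum(cores * secs for cores, secs in measured.values()) / len(measured)

cores_triple = 3 * 2816
print(f"Estimated 3x 980Ti time: {k / cores_triple:.0f} s")  # ~120 s
```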
Actually... if you can wait a few months, it may be better to hold out for the next generation (Pascal) cards, as they should have at least similar core counts but more memory, and/or you can catch the price drop on the current generation cards.
Thanks MJC1016 for the heads-up on the Pascal boards; I had not heard of them yet. I couldn't find specs on the number of CUDA cores. Nothing seems to change faster in the PC area than video cards, and Pascal looks like a nice improvement; I'll be interested to see what impact these have on rendering times. My thoughts were somewhat in line with your idea: build a single-card box now, then hope prices drop on the 2nd and 3rd video cards a little later.
I agree that the 980Ti is the way to go. The $-to-GPU-power ratio is better than the 980, and importantly it has 6 GB of VRAM, meaning it will be able to cope with larger scenes or more characters in the render. Something else to factor in is clock speed; rendering speed is not exclusively down to CUDA cores. Also, be sure to get a CPU that supports 3 GPUs. The 5820k has 28 PCIe lanes, which means it will run 3 GPUs at x8 (24 lanes total) and leave 4 lanes for other PCIe devices. If you get the 5930k, you get 40 PCIe lanes, which means 2 cards running at x16 and 1 at x8. After doing a fair bit of research, it seems some people have noticed running the GPU at x16 is faster than x8. This is not so relevant for gaming, as the benchmarks suggest there is little difference, but for rendering people have said there is a noticeable difference. If you are going to fork out over $2500 on GPUs, maybe it's worth getting the 40-lane CPU.
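To make the lane arithmetic in that post concrete, here's a tiny sketch. The x8/x8/x8 vs x16/x16/x8 splits are the typical arrangements described above, but how a given motherboard actually divides its lanes varies, so treat this as an illustration rather than a rule.

```python
# PCIe lanes available from the CPU vs a typical 3-GPU slot split.
def three_gpu_split(cpu_lanes):
    """Return a typical lane split for three GPUs, or None if too few lanes."""
    if cpu_lanes >= 40:      # e.g. i7-5930k
        return [16, 16, 8]
    if cpu_lanes >= 28:      # e.g. i7-5820k
        return [8, 8, 8]
    return None

for cpu, lanes in [("i7-5820k", 28), ("i7-5930k", 40)]:
    split = three_gpu_split(lanes)
    spare = lanes - sum(split)
    print(f"{cpu}: {lanes} lanes -> x" + "/x".join(map(str, split)) +
          f", {spare} lanes spare")
```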
Risa.kb3, you make an interesting point; I would not have looked at the CPU's PCIe lanes. I was in fact looking at the 5820k CPU, but now I have to research the 5930k, which is another $150 in added cost. I did see that some have noted that even PCIe 3 vs 2 is not significant (maybe referring to gaming again). Pretty confusing indeed, and I try to stay up on this stuff. More research needed to figure that one out, thanks for the info!
I thought the same as you: the benchmarks clearly show that PCIe 3.0 at x8 (which is equal to PCIe 2.0 at x16) should be more than enough not to bottleneck the GPU, but I read a few posts where people said they saw noticeable render speed increases when upgrading their CPU from a decent 16-lane part to the 5930k (both rendering GPU only). It's frustrating that there is not more benchmarking to prove whether this is accurate. I also found a Daz article about picking a GPU that states it is best to make sure your GPUs are running at x16 to get maximum benefit - whether that's with PCIe 2.0 or 3.0 in mind, I don't know.
The Pascal GPUs have a new type of RAM which will allow them to stack more RAM on the card. Nvidia has already stated that consumer versions could feature as much as 16 GB, while workstation cards could feature up to 32 GB. So yes, it could well be worth waiting to see what options they have. I expect to see 4, 8, 12 and 16 GB options available in time, with 8 GB being the new "normal." Maybe not all at once, though, because there may be a shortage of HBM2 RAM chips for a while, so there will be a high premium on a 16 GB model when it does ship.
You can be certain that the series will offer at least a few more CUDA cores as well.
There is a second advantage to Pascal cards: NVLink, supposedly replacing PCIe 3 for video cards and allowing enough bandwidth for the video cards to use motherboard RAM for CUDA tasks.
Possible concerns over Pascal. :)
http://www.overclock3d.net/articles/gpu_displays/nvidia_pascal_mia_at_ces_reportedly_in_trouble/1
Not sure where you got that NVLink is replacing PCIe-3.0. NVLink is the replacement for SLI, which does take inter-card communication off the PCIe bus. But the cards are still PCIe-3.0 cards.
http://www.nvidia.com/object/nvlink.html and http://blogs.nvidia.com/blog/2014/11/14/what-is-nvlink/
Ah. Read the linked whitepaper (the link is on the first linked page). While there are plans (for the big exascale supercomputers) to have NVLink connections on the backplanes, consumer plans for having NVLink on motherboards are considerably further in the future. All current designs are based on PCIe 3.0 for the CPU-to-GPU connection; NVLink is GPU-to-GPU only. To quote from the whitepaper:
"In the future" is definitely subjective. LOL. Especially with computer architecture. :)
True. However, based on past history, the bus design for standard x86/x64 motherboards is pretty slow to change. ISA was put out in 1984, followed by EISA in 1988 (but was still backwards compatible). VLB (VESA local bus) was in 1992 and worked alongside EISA. PCI came out in 1992 at v1.0 (and went up to v3.0 in 2004.) AGP came out in 1997, PCI-X came out in 1998. Both AGP and PCI-X existed alongside standard PCI. PCI-E started in 2004, with v2.0 being released in 2007 and v3.0 in 2010.
So I would expect to see NVLink appearing on a few high-end motherboards within a couple of years, alongside existing PCI-E slots. Significant adoption will be slower, since it is tied to Nvidia (and AMD will come up with its own equivalent), whereas PCI-E is an open standard. And PCI-E 4.0 is expected to be finalized in 2017. (PCI-E 4.0 is backwards compatible with 3.0 and doubles the bandwidth.)
NVLink is a great idea, but a more open standard will more likely appeal to the board manufacturers, as it simplifies construction and doesn't alienate half the market. The current major board manufacturers probably dislike the whole SLI/Crossfire dichotomy that Nvidia/ATI have forced into existence, as it requires them to design and build two separate motherboards to accommodate both protocols. A few have tried to support both on the same board, but that has usually not worked as well as dedicated support.