Comments
I never said you recommended replacing the 960 with a 980 solo; I didn't address your post at all. Sorry if that confused anyone.
But I do not consider any x80 model to be a "budget" card. So when somebody says they are on a budget, I do not recommend them, that is all. Most x80 cards are several hundred dollars, even the old ones used. I wouldn't recommend a 680, unless it is a 4 gb version, but the 4 gb version commands a premium price. At that price you can buy a used 970/980 instead. The same goes for the 780 and the 780ti. The 780ti has a 6 gb version, but it is very rare, and commands an even higher premium, so you'd be better off trying to get a used 980ti instead. So then you get to the 900 series. Oddly enough, the 960's I see on ebay are not much cheaper than the 970's. I go by the "buy it now" prices, as I hate bidding on stuff.
Nvidia's tiers are pretty consistent, but the steps are not equal. Going up the tiers, the differences between these cards are very steep...x40>x50>x60>x70. A 1070 absolutely smokes a 1060, and a 970 smokes a 960. Very true. BUT, going from an x70 to an x80 is not that large of a gap compared to the other models in the series, especially for the price that the x80 model commands. The 670 is really darn close to the 680, so close that some people can overclock a 670 to match a stock 680 (in gaming.) The 970 isn't that far off a 980, and again, the 1070 isn't that far off a 1080, either. They even have the same memory capacity.
These prices are all used. Right now, I see 960's going for $150 at the very best. Most seem to skirt $200, which is shockingly high. 970's are very competitive in pricing, and the reason for that is because millions of them exist. 970's are consistently $200 or less now, making them a great deal. I have seen them as low as $150-160. 980's are usually about $300 or more. The cheapest I have seen was $260. 980ti's still command high prices, and most are over $400. I saw one at $370. That's not bad if you can find one that cheap, but $370 is still a pretty big chunk of change, imo.
For 1000 series cards, I would just buy one brand new, personally, if I were going to buy one. There is not much of a discount buying them used, so buy one new to take advantage of the full warranty package. That said, sometimes warranties do transfer, so that is one more thing to look at when buying a used card: ask if the warranty is still valid.
So after all of that, imo, a used 970 can be a great deal for those on a budget. You can do better, but like all things, better costs more. We all have to decide for ourselves what we can afford.
i7 6700
16GB DDR4 2133
Asus GTX 1070 8GB Dual Fan OC Edition
GPU only, OptiX on
Total Rendering Time: 3 minutes 7.80 seconds
1.0 IRAY rend info : CUDA device 0 (GeForce GTX 1070): 4878 iterations, 12.487s init, 174.273s render
Amazon Web Services G2.2xlarge instance - Grid K520 8 GB (only 4 GB usable, since it's a dual-GPU card), 8 vCPUs (from a Xeon E5-2670 CPU) with 15 GB RAM
Render time disappointing - 21 minutes
This was the lowest-powered GPU instance on AWS. Still, I hoped for better; maybe it wasn't using all the CUDA cores? Will re-check.
EDIT: OK, so you do only get 1 GPU from the K520 - a GK104 GPU with 1536 CUDA cores.
Is that really just rendering time or does that include time to upload the data?
Rendering time only - from the log, but it didn't take long to upload to the server. The initial setup was a pain to be honest, costing me at least 15 dollars (for Iray Server set-up time, FTP for download etc.), but now it's done I only have to worry about updates.
Amazon Web Services C4.8xlarge instance - 36 vCPUs (from Xeon E5-2666 v3 Haswell CPUs), 60 GB RAM
iray server on CPU only - 9 minutes 55 seconds
I'll do the larger GPU instance test tomorrow...
Here, regardless of what anyone thinks, I'm sharing what I'm actually experiencing when using the 680 and 980 Ti combined. I've heard how it's supposed to work, but I'm showing you what I'm actually experiencing. I borrowed this scene from a friend, and rendered it out at 6000 x 1125 pixels for reduction to 5760 x 1080 for three-monitor wallpaper. Because people tend not to believe something unless they see it with their own eyes, especially when they are being told something different, please take a look at the picture I've attached. What you will see when you do is something that differs from the common wisdom being espoused on how Iray works with multiple GPUs in Daz Studio. I don't know, but maybe something changed when they updated the software. I'm using Daz 4.9.3.166 Pro 64 bit. System specs are visible in the image, but here's how I roll:
Windows 7 64 bit Ultimate
Core I5 2500K Sandy Bridge @ 3.3 GHz
32 Gigs of DDR3 RAM at 1600 MHz (just upgraded)
Nvidia GeForce GTX 680 w/2 Gigs VRAM
Nvidia GeForce GTX 980 ti w/6 Gigs VRAM
What you see when you look at the image is a screenshot taken during the render showing the following details:
CPU engaged across 4 cores at 67%. System RAM consumed up to 13 GB. GTX 680 engaged at 100% GPU load with 2031 MB VRAM used. GTX 980 Ti engaged at 96% GPU load with 2930 MB VRAM used. Displays are being supported by the 680--so the 680 card does not have its full VRAM available for rendering the scene--but as you can see, although the software is using more memory on the 980 Ti than the smaller card's 2 GB limit would suggest should be available to either card, the 680 has not dropped out. If the scene demanded much more than this from my system, I fully expect that it would have dropped to 980 solo or even CPU rendering... but it didn't, so in this case, it appears that RAM can stack somewhat. In any case, it did not drop to the 980 Ti alone once the memory capacity of the 680 was met. The larger VRAM bucket of the 980 Ti sitting in support of the display-driving 680 appears to have worked the way I hoped it would. The second image attached is a screenshot of Task Manager showing how much system RAM Daz in particular was using.
I understand why others might not consider -80 series cards budget, but I disagree for very specific reasons. I've owned lower-spec cards and was always disappointed. Whether you dislike the way Nvidia does business or not, I frankly find it very bothersome buying something that the manufacturer has deliberately crippled just to sell at a lower price point, which is quite often the case with -70 series cards. Same chip capabilities if you look into it, except for a portion of it being disabled. I was never able to accept the fact that I paid for something that should've been able to run with the -80 series cards had it not been for the manufacturer ensuring that only those who spent more money got the full power of their tech. As a result, I'd rather buy an older -80 series card--and run two of them--than drop money on something that I will ultimately be dissatisfied with later. The guy was asking about speeding up on a budget, and I shared what I know. Budget is relative. It's actually all about priorities. Frankly, if you don't mind taking a risk on a used card, you can get something very nice for a lot less than Nvidia and the others are asking, plus you get the sense of goodwill that comes from supporting everyday people while not paying top dollar to the manufacturers.
I've added a third pic for those who might wonder how much is running on the 980 from other system processes--this pic shows Daz Studio open without anything rendering--same scene, all other software running same as before. 980 ti shows 0% GPU Load and 56 MB VRAM usage. So, as you can see, nothing else is using up that memory during the render. The facts are exactly as I've related them. Why? I have no idea, but it works.
2,031MB is less than 2GB (2,048MB) so I'm not sure what you are trying to say here.
Okay, right now, rendering that scene, the actual VRAM used on the 680 is varying between 2036-2044, while the amount running on the 980 is varying and currently shows 2931 MB used. From what I understood, the two GPU's should only remain engaged in the render if the 2 Gig limit of the lower card is not exceeded because as others state, the RAM does not stack. That would mean that neither card can exceed the 2 Gig limit. Clearly, the rendering is running over 2 Gigs total, whether every last ounce of the weaker card is being squeezed or not. So, I'm saying the weaker card is not dropping out when the scene render demands exceed 2 Gigs. That's what I'm saying. The 980 is pulling close to 3 Gigs. How is that not stacking to a degree? That's substantially different to my mind from the way it seemed to be presented to me, in which case each card could pull 2 Gigs max in a render while still keeping both cards engaged. Rather than a 4 Gig limit, my system is clearly showing a multi-card render capable of nearing 5 Gigs, and I'm saying that I've rendered images with the 980 pulling over 3 Gigs, so I'm not sure what the limit is before the 680 drops out. I'm sharing this because all that talk about not stacking might discourage some from upgrading/adding a second card, but my system does not drop the weaker card when it hits the 4 Gig limit that I thought there would be. I'm not suggesting that I could stack my GPU VRAM and get 8 Gigs on a multi-card render, but I'm saying that clearly no-stacking is not what's happening either. Just to clarify, I don't think anything I said suggested that 2,031 MB exceeded 2 Gigs. I may be stupid, but I'm not that stupid.
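For what it's worth, here's the raw unit arithmetic on those GPU-Z readings, just as a quick sanity check (a simple illustrative calculation; it doesn't by itself settle whether the memory is actually pooled):

    # Quick unit check on the GPU-Z readings quoted above (illustrative only).
    MIB_PER_GIB = 1024

    gtx_680_mib = 2044     # peak reported on the 680 during the render
    gtx_980ti_mib = 2931   # reported on the 980 Ti during the same render

    print(f"680:    {gtx_680_mib / MIB_PER_GIB:.2f} GiB (card has 2 GiB)")
    print(f"980 Ti: {gtx_980ti_mib / MIB_PER_GIB:.2f} GiB")
    total = gtx_680_mib + gtx_980ti_mib
    print(f"total reported across both cards: {total / MIB_PER_GIB:.2f} GiB")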
Hi InfiniteSuns.
You said: "two GPU's should only remain engaged in the render if the 2 Gig limit of the lower card is not exceeded because as others state, the RAM does not stack. That would mean that neither card can exceed the 2 Gig limit. "
You used the word 'neither'. Here's what I've been told: If GPU A's memory is exceeded, GPU B will still be utilized assuming it has enough memory to handle the scene. Basically all (copies of) the scene's geometry, textures, etc are loaded into every card. If any card fails to load the scene due to exceeding memory, Daz still attempts to load it into the other card(s) on the system, utilizing those where it fits.
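In other words, the behavior I've been told about looks roughly like the sketch below - a minimal Python illustration of the "full copy per card, drop cards that can't fit" rule. The names here are made up for illustration; this is not the actual Iray API.

    # Rough sketch of the per-card behavior described above (hypothetical names,
    # not the Iray API): every card gets its own full copy of the scene, and any
    # card that can't hold it is skipped rather than borrowing memory from others.
    def pick_render_devices(devices, scene_size_mb):
        usable = []
        for dev in devices:
            if scene_size_mb <= dev["free_vram_mb"]:
                usable.append(dev["name"])   # this card keeps rendering
            else:
                print(f"{dev['name']}: scene does not fit, dropped from render")
        return usable                        # empty list -> fall back to CPU

    devices = [
        {"name": "GTX 680", "free_vram_mb": 1600},     # also driving the displays
        {"name": "GTX 980 Ti", "free_vram_mb": 6000},
    ]
    print(pick_render_devices(devices, scene_size_mb=2900))   # -> ['GTX 980 Ti']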
As for why your 680 remains engaged, I cannot answer that. I suspect it's because you are right on the border of its 2 GB limit. If you added another figure to your scene, bumping the requirement to 3+ GB of total memory needed, I suspect you'd see the 680 drop out and the 980 continue to be used.
I have a couple of identical cards, 8 GB each, and GPU-Z does show different amounts of memory used on each card, which I find a bit odd, but I'm not claiming to be a system engineer. After all, one card supports my monitor and there are probably other things occurring which have an impact.
Hopefully this response didn't come across as dismissive; I'm just sharing the things I've heard/experienced regarding the utilization of the cards.
It may be that there is more superfluous stuff on the 980 - it's less of an issue in Windows 7 than in 10, but I believe that a certain amount of RAM is reserved for each output that the card can drive even if nothing is connected. I agree the numbers do look odd, but I don't think they show the 980 taking up the slack for the 680 in any way.
I get that the second GPU remains until utilized to its fullest. When I said neither, I was talking about the dropping of the first GPU as the 4 Gig threshold I was expecting was crossed. That did not happen. What I'm not getting is how the memory is not stacking if the figures I'm seeing are correct. You suggest it is an anomaly or error. I will not rule that out, but I cannot know which it is without further testing. I only know what the tools are reporting. In that case, Daz or Iray is not handling the memory usage as it should. If the 680 is not being utilized, perhaps it should be freed up.
Revisiting the issue--with Daz closed, tools show my system using 260 MB for display off the weaker card. Opening Daz without loading anything into the scene brings the usage up to 478 MB on the 680. Opening the scene I used brings the 680 up to 614 MB VRAM used. Only 56 MB VRAM used on the 980 Ti. Upon rendering, there is no information regarding failure to utilize the first GPU. The GPU loads to max, and the memory fills, as seen in the GPU-Z tool. The render history sub-window, whatever the proper name for it is, gives no indication of error. However, checking the troubleshooting log gives another view entirely--and it's very upsetting that this is not addressed properly:
2017-01-17 10:00:17.324 Iray INFO - module:category(IRAY:RENDER): 1.2 IRAY rend info : CUDA device 0 (GeForce GTX 980 Ti): Scene processed in 43.331s
2017-01-17 10:00:17.340 Iray INFO - module:category(IRAY:RENDER): 1.2 IRAY rend info : CUDA device 0 (GeForce GTX 980 Ti): Allocated 154.496 MiB for frame buffer
2017-01-17 10:00:17.402 Iray INFO - module:category(IRAY:RENDER): 1.2 IRAY rend info : CUDA device 0 (GeForce GTX 980 Ti): Allocated 848 MiB of work space (1024k active samples in 0.059s)
2017-01-17 10:00:17.964 Iray INFO - module:category(IRAY:RENDER): 1.3 IRAY rend info : CUDA device 1 (GeForce GTX 680): Scene processed in 43.970s
2017-01-17 10:00:17.964 WARNING: dzneuraymgr.cpp(307): Iray ERROR - module:category(IRAY:RENDER): 1.3 IRAY rend error: CUDA device 1 (GeForce GTX 680): out of memory (while allocating memory)
2017-01-17 10:00:17.964 WARNING: dzneuraymgr.cpp(307): Iray ERROR - module:category(IRAY:RENDER): 1.3 IRAY rend error: CUDA device 1 (GeForce GTX 680): Failed to allocate 903.56 MiB
2017-01-17 10:00:19.228 Iray INFO - module:category(IRAY:RENDER): 1.3 IRAY rend info : CUDA device 1 (GeForce GTX 680): Allocated 154.496 MiB for frame buffer
2017-01-17 10:00:19.337 Iray INFO - module:category(IRAY:RENDER): 1.3 IRAY rend info : CUDA device 1 (GeForce GTX 680): Used for display, optimizing for interactive usage (performance could be sacrificed)
2017-01-17 10:00:20.149 Iray INFO - module:category(IRAY:RENDER): 1.2 IRAY rend info : Allocating 1 layer frame buffer
I have seen in the log before where the render engine noted that memory was available on the first GPU and re-utilized it where it had been dropped previously--but this is confusing. It says here that it is being optimized for interactive usage. So... Is it being utilized or not? GPU-Z and the card both seem to think it has a job to do here. Daz says maybe. Expert opinions say otherwise. What gives? Since this is going so far astray, I will not take up more of this thread to touch on this. I would dearly love to know what the actual deal is, however, as the tools suggest that something more is happening here than is easily explained.
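If anyone else wants to check the same thing without eyeballing GPU-Z, a quick way is to scan the Daz log for per-device Iray errors. A minimal sketch - the log path below is just an example, so adjust it to wherever your install keeps its log file:

    # Scan a Daz Studio log for per-CUDA-device Iray errors, such as the
    # out-of-memory lines quoted above. The path is only an example.
    import re

    LOG_PATH = r"C:\Users\me\AppData\Roaming\DAZ 3D\Studio4\log.txt"  # adjust

    device_line = re.compile(r"CUDA device (\d+) \(([^)]+)\):\s*(.+)")

    with open(LOG_PATH, errors="ignore") as f:
        for line in f:
            if "IRAY rend error" not in line and "out of memory" not in line:
                continue
            m = device_line.search(line)
            if m:
                dev_id, dev_name, msg = m.groups()
                print(f"device {dev_id} ({dev_name}): {msg.strip()}")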
Do you have your viewport in DS set for Iray interactive? If so, then what you may be seeing is that when the second GPU fails to load the scene data for your final render, it defaults back to being used for the viewport, which is consuming less VRAM due to the lower quality and image size settings, and therefore your scene is able to fit in VRAM for DS viewport rendering. Possibly what you are seeing is the DS viewport being rendered at the same time as your main render, so both cards are rendering, but they aren't rendering the same image?
IMHO if nvidia had managed to enable "stacking" of GPU ram, it would be a hot topic on their forums, and also in a lot of media hype. To better test your theory, you need to make a scene that definitely won't fit on the smaller card, but should fit in combined GPU memory (also make sure you aren't using iray in any open DS viewports).
AWS G2.8xlarge instance - 4 GK104 GPUs with 4 GB each (from Grid K520s), 32 vCPUs (from Xeon E5-2670 CPUs), 60 GB RAM
*1 of the GPUs wasn't being used, I don't know why; GPU-Z showed it blank too
Render Time - 6 minutes 45 seconds
AWS P2.xlarge instance - 1 GPU with 12 GB (from an NVIDIA K80), 4 vCPUs (from a Xeon E5-2686 v4 CPU), 61 GB RAM
Render Time - 8 minutes
edit - With DS Instance Optimization set to Speed - Render Time: 5 minutes 6 seconds
*Iray Server warns that ECC is not turned off, but there seems to be no way to do that on AWS
Important note....it has been determined that the verbose Iray details in the log are reporting UNCOMPRESSED texture sizes (Iray does compress textures internally on the card, there are threshold settings for none, medium, and high compression in the Iray settings). So even though you SEE Iray saying it's 7GB of textures in the log file, that doesn't mean that's how much it takes up on the card.
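For a rough sense of scale of the numbers Iray prints: an uncompressed 8-bit texture is just width x height x channels x 1 byte, so a handful of 4K maps balloons quickly. A tiny illustrative calculation (the figures here are my own examples, not taken from any log above):

    # Rough uncompressed size of 8-bit textures: width * height * channels bytes.
    # Example figures are illustrative only.
    def uncompressed_mib(width, height, channels=4, bytes_per_channel=1):
        return width * height * channels * bytes_per_channel / (1024 * 1024)

    one_map = uncompressed_mib(4096, 4096)                    # a single 4K RGBA map
    print(f"one 4096x4096 RGBA map: {one_map:.0f} MiB")       # 64 MiB
    print(f"a dozen such maps:      {12 * one_map:.0f} MiB")  # 768 MiB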
Now, as to why it's saying it's getting out-of-memory errors, and thus shouldn't use the card, but still is......that's still a good question. Really need some input from the DAZ devs on this one.
AWS P2.8xlarge instance - 8 GPUs with 12 GB each (from NVIDIA K80s), 32 vCPUs (from Xeon E5-2686 v4 CPUs), 488 GB RAM
Render Time - 1 minute 14 seconds
edit - with DS Instance Optimization set to speed - Render Time - 54 seconds
Wait. 8 GPUs?? Surely that would render faster than 1 minute?!
Still, that render time is impressive enough to make me wonder how much it costs.
I hoped for faster, but it's still pretty fast. Expensive though, at $8.67 per hour (Windows) - I just wanted to complete my tests.
Not going up to the next level; the P2.16xlarge is $17.44 per hour!! Well... never say never.
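For anyone curious what a single render actually costs at those hourly rates, the arithmetic is just rate x (render time / 3600). A rough illustration using the times above, ignoring the billing minimum and the time spent uploading and setting up:

    # Rough per-render cost at on-demand hourly pricing (ignores billing
    # minimums, scene upload, and idle time).
    def cost_per_render(hourly_rate_usd, render_seconds):
        return hourly_rate_usd * render_seconds / 3600.0

    print(f"P2.8xlarge, 54 s render: ${cost_per_render(8.67, 54):.2f}")   # ~$0.13
    print(f"P2.8xlarge, 74 s render: ${cost_per_render(8.67, 74):.2f}")   # ~$0.18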
Here's mine
I7 - 4770 CPU @ 3.4 GHz
1 GeForce GTX970 4 Gig
CPU + GPU Total Rendering Time: 7 minutes 9.67 seconds
Oooh... gonna have to update my AWS results. Did the original benchmark settings set instance optimization to "memory"? I thought I'd check again - and mine is.
System: 2x xeon e5-2670v1 (each 8core w/ HT @ 2.6 ghz static, 20m cache), 64g ecc, 8g gtx 1070 (simultaneously running 1xHD + 1xUHD monitors and rendering), win/daz running from ssd. Times in m:ss to 100% complete tested with various combinations of system render settings (no scene settings changed from defaults for the test):
2:48 optix on, cpu+gpu, optimized for speed, cpu on for both photoreal and interactive
2:57 optix on, cpu+gpu, optimized for memory, cpu on for both photoreal and interactive
3:07 optix on, gpu only, optimized for speed, cpu on for both photoreal and interactive
3:11 optix off, cpu+gpu, optimized for speed, cpu on for both photoreal and interactive
3:16 optix on, cpu+gpu, optimized for memory, cpu off for interactive (FYI, this and other testing found no noticeable difference between cpu being on/off for general UI lagginess in larger scenes for me)
19:36 optix on, cpu only, optimized for speed, cpu on for both photoreal and interactive
Notes: unaltered render scene settings, computer was basically left alone during each test though a bunch of other apps were open, system ram usage peaked around 13gb, gpu and cpu usage consistently >95% utilization the whole time after the initial ~40sec setup portion (though only on one or a few cpu cores and 0% gpu for the setup period -- after about 40sec, all 32 cpu cores and the gpu shoot to >95% utilization until the scene is complete, which is likely just the sequential vs. parallel nature of the setup vs. compute processes).
My findings: my results compared to others' suggest that GPU architecture makes the largest difference in overall render time, GPU model within a series makes a big difference, and finally, CPU cores+speed make a decent difference. It also seems like there is no obvious reason to disable OptiX or not use both CPU and GPU unless you experience issues. My CPUs are older than many listed in this thread, but my render times are faster than many, suggesting that both GPU age and core count are reasonably important variables for render time, though for how laggy the onscreen experience is when there are multiple scene elements, high per-core GHz would probably make for a smoother UI experience.
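To put numbers on those comparisons, the relative speedups between the timings listed above work out as follows (just converting m:ss and dividing):

    # Relative speedups between the timings listed above (m:ss -> seconds).
    def seconds(mmss):
        m, s = mmss.split(":")
        return int(m) * 60 + int(s)

    best_cpu_gpu = seconds("2:48")   # optix on, cpu+gpu, optimized for speed
    gpu_only     = seconds("3:07")
    cpu_only     = seconds("19:36")

    print(f"cpu+gpu vs cpu only: {cpu_only / best_cpu_gpu:.1f}x faster")   # ~7x
    print(f"cpu+gpu vs gpu only: {gpu_only / best_cpu_gpu:.2f}x faster")   # ~1.11x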
2017-01-21 19:14:27.331 Finished Rendering
2017-01-21 19:14:27.362 Total Rendering Time: 3 minutes 13.47 seconds
2017-01-21 19:14:27.378 Loaded image r.png
2017-01-21 19:14:41.166 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
2017-01-21 19:14:41.166 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 1070): 4852 iterations, 13.944s init, 178.145s render
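A handy way to compare these log excerpts across different posts is iterations per second rather than total time, since scene setup time varies with CPU and disk. A quick calculation using the two GTX 1070 lines posted in this thread:

    # Iterations per second for the two GTX 1070 log lines in this thread.
    results = {
        "GTX 1070 (earlier post, GPU only + OptiX)": (4878, 174.273),
        "GTX 1070 (this log)":                       (4852, 178.145),
    }
    for name, (iterations, render_seconds) in results.items():
        print(f"{name}: {iterations / render_seconds:.1f} iterations/s")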
I have a new laptop coming with a GTX 1060 in it, so I wanted to get a good benchmark to see how it compares to my current setup. My current laptop has a GT 750M video card with 2 GB memory. I ran a render using the unmodified starter scene (i.e. with all the spheres). After 3 minutes it was at 90% but looked terrible. Full render to completion per the log: Total Rendering Time: 30 minutes 29.77 seconds
Now once I get the new machine I'll be able to see how it compares. I'm looking forward to substantially faster renders!
Is it the 3gb or 6gb version of the 1060? Either one would be a big improvement, but the 6gb model not only doubles the vram, it has more cores. They don't even use the same chip if I remember correctly, so it is very odd that Nvidia named them both the 1060. IMO, they should have called the 6gb model the 1060ti to make the difference more apparent (not to mention the marketing potential.) But anyway...
At any rate, you should be pretty hyped! My prediction: 6-12 minutes for the test scene, depending on which model you got. The 6gb version might even dip into the 5 minute range if you got a new i7 with it.
Oh, and I am sure you are aware, but just in case, the 1000 series only works with the latest version of Daz and its beta that released last month.
It's a 6 GB GTX 1060 and a new Kaby Lake i7 CPU, albeit the laptop version so optimized more for power consumption than raw computational power. I toyed with getting the next step up, the laptop with the GTX 1070 in it (8 glorious GB!) but it was $200 more and I just felt like it was too much for that one feature. This is already a pricey rig because ... gaming laptop. I'll take a 6 minute render, but in truth I had fingers crossed for 3 minutes based on some of the results shown here; a 10:1 reduction would have been awesome. 5:1 would be too, and I'll gladly live with it!
I do know it has to have the latest version of Studio only. Of course now that's all you can install anyway since I need to install from scratch. Anyway I held off on buying until the new official version of Studio came out with 1000 series support. Supposedly it comes today but I won't be home until the afternoon so if it comes early it's possible that nobody will be there to sign for it. How bummed will I be if it came and left and I have to wait one more day!?
Well 1060 certainly would not have been getting better render times on 4.8 than 4.9 because 4.8 did not support the 1060 at all. 1060 is only supported in the very latest 4.9 release, and a few of the betas prior to that.
Dude, I kind of stated the 1000 series only works in the new release in the previous post right before this one...so I am quite aware of that situation.
The 4.8 remark was a separate observation.
I ran it on my GTX 1060, here's what I had vs what I have:
OLD (GT 750M 2GB) : Total Rendering Time: 30 minutes 29.77 seconds
NEW (GTX 1060 6 GB): Total Rendering Time: 4 minutes 59.49 seconds
outrider42, your prediction of 5 mins for the 6 GB version of the 1060 was way off - by 0.51 seconds, to be exact. Wait, that's pretty much perfect. Well, it wasn't the 3 mins I was lusting over, but I'm very happy! What a difference!
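For the record, the actual speedup works out to roughly 6x - right between the hoped-for 10:1 and the expected 5:1:

    # Speedup from the GT 750M to the GTX 1060 6 GB on the same benchmark scene.
    old_seconds = 30 * 60 + 29.77    # 30 min 29.77 s
    new_seconds = 4 * 60 + 59.49     # 4 min 59.49 s
    print(f"speedup: {old_seconds / new_seconds:.1f}x")   # ~6.1x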