Comments
Funny you say that; I got a good one at full price. Between rebates and someone at the store offering to buy my 1080, it's only costing me 1000 CAD in total.
My experiences on eBay have been good and bad. I was not satisfied with something lately and would have been able to return it, but at a loss of shipping and import fees, which I thought was lame. This was from a reputable seller.
Hold up. So NVLink will work in Daz now? I'm confused lol
The last update supposedly made it available, but I don't have the kind of scratch to test it out for myself lol
Well, EVGA has a 3-year transferable warranty. I always ask how much is left on it. I had an eBay card fail a couple of months ago; EVGA gave me a working one. The best is if it was never registered at EVGA; then you get 3 years from the point of purchase. I figure if I can get one for 1000, and sell either one of my 1080 Ti's or my 980 Ti, it shouldn't be too bad.
It depends on the card, if you're talking about the VRAM-sharing function. Currently, the only card that will share VRAM is the Titan RTX. Of course, I would think that means the 3080 Ti, when it comes out, might just be able to do that.
Update? What update?
The last Daz Studio update, I meant.
Ah. Well, none of my cards have memory scaling capability, so it's not a big deal for me at the moment.
Did he explain why? Undervolting a lot is a sure way to get into trouble but dropping a little has often been shown to not affect the card's performance at all while lowering temps.
I started with a 980 Ti but couldn't afford the jump from 6 GB to 12, so I only got the 6.
But last year I started looking at the 2080s (I was too broke to grab a 1080, which may have been a good thing when they were new),
so I was running around comparing cards, CUDA counts, etc., and shopping.
---
And hit on a Titan X on eBay for $440... not as fast as the 2080s, but still a high CUDA count.
One of the earliest renders hit 11,800 MB on the card, so the 12 GB instead of 11 saved my butt on that one.
---
my renders usually take about an hour
but very seldom do I have issues..
I do have both the 980 Ti and the Titan X in the case... it's going to take a riser to get another one in.
The issue is not the case, which is a big Thermaltake server case ($20 at the reuse store).
It might only have two fans, but yep, the side is open and it's mounted sideways behind the monitor
(during an extended power outage with no computer on, the ambient room temp did drop ten degrees).
---
As for the 100% usage on each card...
even when I have a scene that should only fit on the big card, the render engine still uses the smaller card.
I've seen them chug along with 10 GB on the big one and 5 GB on the small one.
---
I do have 64 GB of DDR4 and dual Xeons. I keep Battle Encoder sitting on Daz to make sure it can never use more than 66% of the CPU, so I can go on and work while rendering.
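(For anyone curious how you'd do something like that in code: Battle Encoder Shirase throttles by repeatedly suspending the process, but a rough alternative sketch, assuming you just want to keep some cores free, is pinning the renderer to a subset of cores with Python's psutil. The process name below is an assumption; adjust it for your install.)

```python
# Hypothetical sketch: keep the machine responsive by pinning the renderer
# to a subset of cores. This is core pinning, not the duty-cycle throttling
# Battle Encoder Shirase does, but the goal is the same: leave headroom for
# other work while a render runs.
import psutil

TARGET_NAME = "DAZStudio.exe"        # assumed process name; adjust as needed
ALLOWED_CORES = list(range(21))      # ~66% of 32 threads on a dual E5-2630 v3 box

for proc in psutil.process_iter(["pid", "name"]):
    if proc.info["name"] == TARGET_NAME:
        proc.cpu_affinity(ALLOWED_CORES)   # restrict scheduling to these cores
        print(f"Pinned PID {proc.info['pid']} to {len(ALLOWED_CORES)} cores")
```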
---
Including working on a second Daz scene, which I never seem to hear people comment on...
I've never tried rendering two scenes at the same time, though curiosity almost impels me to try it,
but at times I have 4 Daz scenes up... since you can't cut and paste between them, I'll work on something, export it as a subset, and then load it into another one...
---
The attached render was the 47-minute one, at 11,888 MB usage on the card. It's only 2k x 1k... but the 4k x 2k renders seem to only take 50% longer.
Outside of the HDR sky, the light is just the lights in the scene, although the entire set is much larger and I should probably turn off the ones that are out of the picture.
---
the second is the complete set ...
---
But I am looking forward to getting another card in... and seeing how three work.
But unless you get the fancy package from Nvidia... the CUDA cores don't stack, the RAM doesn't stack, so your gain is basically just in saving time rather than being able to do larger projects?
He did not go into detail.
The CUDA cores all stack. The VRAM doesn't. Yes, certainly the main gain with multi-card setups is in saving time. I make comics, and I like to do between 15-20 frames/day. With some of my scenes, a single card wouldn't be able to get it done. Maybe a single 2080 Ti could, but my current setup is faster than a single 2080 Ti.
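A rough way to picture it, as a sketch with nominal published specs (the numbers are illustrative, not a benchmark, and per-core efficiency does vary somewhat between generations):

```python
# Back-of-the-envelope sketch of why CUDA cores "stack" for Iray speed but
# VRAM does not: combined throughput is roughly the sum of each card's
# cores x clock, while the scene still has to fit in each card's own VRAM
# for that card to participate.
cards = {
    # name: (CUDA cores, boost clock in GHz, VRAM in GB)
    "GTX 1080 Ti": (3584, 1.58, 11),
    "RTX 2080 Ti": (4352, 1.55, 11),
}

total = sum(cores * clock for cores, clock, _ in cards.values())
for name, (cores, clock, _) in cards.items():
    share = cores * clock / total
    print(f"{name}: ~{share:.0%} of the combined throughput")

# VRAM does not pool on GeForce cards: a card simply drops out of the render
# if the scene exceeds its own memory, so usable scene size is per card.
min_vram = min(vram for _, _, vram in cards.values())
print(f"Scene must fit within {min_vram} GB for every card to keep rendering")
```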
I'm glad I was able to help Areg ... It's a complex issue when running multiple cards since the variables are numerous, as you now know.
In my workstations I run things a bit differently than you. With the Titan RTX's, it's important to run the cards not connected to a monitor in TCC mode in order to have the VRAM scale. So in my DAZ/Iray machine I run two Titans in TCC and the other in WDDM mode to drive my Dell 8K monitor. In my Iray Server workstation I use the built-in VGA output for a simple monitor and have all three Titans set to TCC for rendering. So for authoring in DAZ and the viewport I have two Titans at 48 GB of VRAM scaled, and for rendering in Iray Server I can have up to 72 GB for larger scenes.

Now of course you need the NVLink bridge, and nVidia does not offer or support a three-way NVLink bridge. However, a few friends at nVidia were kind enough to build me one as a favor for my Iray Server machine. For the rest of you, if you do get a pair of Titan RTX's, I can tell you the speeds are lightning fast when running in TCC mode, using a 2-way NVLink bridge and with 48 GB of available VRAM. You can run any other card in WDDM mode to drive your monitor.
The rub here is that when running both machines on the same image, I am limited to the smaller 48 GB dataset, since scaling does not happen across a network. However, the speed difference is significant when running 5 Titan RTX's simultaneously via Iray Server.
As we discussed on Tom's, VRAM scaling is only available on the Titan RTX and the RTX 6000 & 8000, and only when the cards are in TCC mode. There are workarounds for GeForce cards, but they are hacks and have proven to be unreliable, especially the GPU BIOS hacks.
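If you want to double-check which driver model each card is in before pointing Iray at it, here is a minimal sketch, assuming Windows with nvidia-smi on the PATH (switching a card's mode is typically done with nvidia-smi's driver-model option, which needs administrator rights and a reboot):

```python
# Minimal sketch: list each GPU's current driver model (TCC vs WDDM) and its
# memory, so you can confirm which cards Iray will see in TCC mode.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,driver_model.current,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)   # e.g. "0, TITAN RTX, TCC, 24576 MiB"
```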
A sidenote: with the 88 threads and 512 GB of system RAM in my workstation, the Iray fallback CPU rendering speeds are about equivalent to three Titan Xp's. But I have that much CPU processing power for when I am using Maya and V-Ray, since GPU rendering in V-Ray has some pretty nasty artifacts in the rendered image(s), which show up as noise in animations.
Help is an understatement! I have been struggling to get my computer to render without a hitch using three cards ever since I had three cards. You had my solution. And I'll tell you what: I hope you don't mind, but whatever build is in your signature is going to be my next build. You were not the first to suggest I avoid consumer-grade stuff for my business (such as it is), but you will be the last. My next build will definitely be a dual Xeon server. For now, though, I'm good. I'm closing in on a 2080 Ti. Looking like I can get one for about a grand. I get that, sell my 980 Ti on eBay to offset the price a little bit (it still works perfectly), pull one of the 1080's from my Thermaltake build and put that in the Coolermaster. Then I'll gradually replace the other two cards in the Thermaltake. If I sell those cards, that will further offset the cost.
You are a God. Thank you for all of your help!
My next build will definitely be a dual Xeon server.
---
I made one mistake when I got mine... I just went to Newegg and looked at the server boards... and I did get a dual Xeon, 16-memory-slot board... I could have done better in looking at the PCI layout,
but it was a straight server board... instead of the workstation version of the board... it didn't have any audio... so I had to get a card for that.
----
With two video cards, only one little slot is available, and it has a USB 3 card in it... although the sound card's USB works fine.
----
Four years ago when I built mine I had X dollars to spend... so I was going to start with one 2630 v3 at $850... but just on some kind of wild thought I checked eBay
and found a pair of them for $350 each..
Did some thinking: at that point I had a couple of computers with i7 920s which had come to me used and had never given me a bit of trouble over 4 years or so.
So I figured... two used ones... even if one fails in a year I've still spent less, and so I bought them... so it's four years later... and they're both still chugging along.
---
Is the 980 a 12 or 6 gig card and how much?
980 Ti's are 6 gig. I would think I might be able to get 250-300 for it on Ebay.
There is a lot of server HW entering the used market due to Epyc. If you guys want that stuff, look now, although tbh most of it, except the very highest-end gear, is pretty awful. New desktop parts just crush Xeons from 2 or 3 generations ago.
I'm a ways away from another build. But I would plan on not skimping on the motherboard or processors if I went with a server build.
I'm not using server hardware, I'm using workstations. It's not just a matter of performance. And the stuff is not awful, Ken, even by comparison. Epyc is indeed awesome, and so are some of the newer consumer CPU products. However, the consumer CPUs and motherboards are not designed or binned with long-term reliability in mind. And most consumer mobos don't support the full capabilities of the CPUs they can mount. In other words, they are not designed to run 24/7 at 100% duty cycle, they lack major features, and they are not scalable. Run Threadripper under those conditions and it's bye-bye CPU in a few months. On my system, which is last-gen (v4) E5 Xeon, with my graphics cards (Titan RTX) I can render an image in Iray at 4K UHD (2160) resolution out to 7500 iterations in a few minutes. A little less than half that time if I also use my Iray Server workstation. I wouldn't call that "awful" ... more like awesome!

Now, I bought my workstations used from large refurbishers on eBay for about $2400 USD, then made upgrades. From eBay they came with dual 22-core Xeons (88 total threads), factory water-cooled, 126 GB of RAM, a 1 TB SSD, three 3 TB HDDs, Win 10 Pro installed (with creds) and even an old Quadro card. The performance per dollar is high, and the scalability is much more than any consumer products, which can't come even close.
Bear in mind the real power with DAZ/Iray is in the graphics cards, which is why I run so many Titan RTX's, and the same in my Iray Server workstation. The CPU and system RAM are much less consequential. However, this is not the case with Maya and V-Ray, which rely on the sheer horsepower of the system: CPU and system RAM. Since I use both, I made the investment into Titan RTX in a big way, and do not regret it for a moment. Now, what is probably sinful for most of you: in my DAZ machine I have one of the three Titan RTX's set to WDDM and only being used to drive my monitor. It's a Dell 8K monitor, and I exclusively use Iray in my viewport. I don't spot render, I see the entire image rendered in real time, albeit at a lower iteration count.
Not to mention a workstation has extreme cooling solutions factory built in (mine has water cooling for each CPU). It has 12 fans divided across three physically separated cooling "zones": one for each CPU cooler, one for each bank of DIMMs, and four in the case. The 1450W PSU has two of its own. Workstations also accommodate scaling that consumer products simply cannot come close to. The entire machine can be completely disassembled without tools, including PCIe cards, drives, PSU, etc. (the only exception being the CPU clamps). I have 512 GB of system RAM on eight channels, and that's only half the maximum capacity. I also have 80 PCIe lanes, not including an extra 20 set aside for the chipset. There is plenty of headroom for graphics cards, NVMe cards and others without saturating the PCIe lanes. Most consumer boards, while they may be used with CPUs that have many lanes available, do not support much more than 30 lanes, including the chipset and DMI.
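To put the lane math in concrete terms, here is an illustrative sketch (the device lane counts are typical values, not my exact configuration):

```python
# Illustrative PCIe lane budget: why 80 CPU lanes leave headroom that a
# ~30-lane consumer platform does not. Lane counts per device are typical.
devices = {
    "3x GPU at x16": 16 * 3,
    "NVMe drive at x4": 4,
    "add-in card at x4": 4,
}

used = sum(devices.values())
for platform, lanes in {"dual-Xeon workstation": 80, "typical consumer board": 30}.items():
    headroom = lanes - used
    status = (f"{headroom} lanes to spare" if headroom >= 0
              else f"short by {-headroom} lanes (devices drop to x8/x4)")
    print(f"{platform}: {used} lanes requested vs {lanes} available -> {status}")
```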
I've been down the consumer route and tried just about everything, till I decided to finally go with what we use at work ... HP Workstations. I will never go back.
On another note, undervolting a GPU will not significantly decrease performance; however, it can lead to instability, since a certain amount of voltage is required for a given clock rate and needs to be consistent. Modifying such things is acceptable when doing rasterization (object-order) rendering, such as with gaming. However, once you get into CUDA compute processes (image-order rendering) the situation becomes very different. It is essential that clocks and voltages are stable, so messing around with this is trouble waiting to happen. Or at least you risk having renders stop in mid-work.
And yes ... NVLink and memory scaling do work with Iray (not DAZ per se) if the cards capable of using them are set to TCC mode and not driving a monitor.
Since Iray works well with the video cards... no need to worry about uber computer power.
---
I've got dual 2630 v3's, and I usually have Photoshop, 3ds Max, a couple of Daz scenes open, 20 Facebook pages, the music program playing ... and I can use the computer without issue even while rendering...
If anything, the issue is only having 64 GB of RAM, because I do have some scene files that turn Daz into 15 GB of RAM... and two of them open with everything else can be a bit much.
---
Last time I checked, going up to the 2690 series wasn't that much, but even at (I think) $500 for two... I looked at the performance lift, and unless I were to start rendering with the CPU, I couldn't see any need for it.
----
Daz doesn't use the CPUs at all... the only time the CPUs get used is when the render falls off the card...
Wow, that's very nice hardware! :D
For a high earning business that's great!
For me, a Ryzen 9 with 2x3080 and 64GB of RAM is the most I can even dream about at the moment :)
(I'm currently using a Ryzen 5 1600, 16GB of 2400MHz RAM, RTX 2060)
Very nice indeed. I'm a few years away from my next build, but when I build it I'm using whatever he has as a guide. I just pulled the trigger on two 2080 Ti's that I got on Ebay for a grand each: a new Black edition and a used SC Hybrid, never registered with EVGA. So after careful consideration, those will be going into the Thermaltake case. I may keep one of the 1080 Ti's in there as well. More CUDA is better after all. The other 2 go into the Coolermaster, and I sell the 980. Then both rigs can be used to render. Actually, the Thermaltake is running so well now that I have all of the issues sorted out, that by the time I have the next scene set up on the Coolermaster, the rendering is all done. If I could just come up with a way to pose the scenes faster I'll be all set.
Congratulations on the new GPUs! :D
It will be interesting to see if that extra 1080 Ti improves the render speed, or if it slows it down!
I wonder how much it costs to do a server build. The Titan X alone costs double my entire current PC, peripherals included.
I already know it won't slow it down. The architecture is not important; the number of CUDA cores is what determines rendering speed for the most part. I have pretty extensive experience running cards of different generations. I've mixed Kepler/Maxwell, Maxwell/Pascal, Kepler/Maxwell/Pascal. No problems at all, and it speeds up renders. If it were slower I'd be shocked.
Good, let me know! :D
If it doesn't slow things down, I may add an Ampere GPU to my RTX 2070 Super.
Will do. :)