Comments
And the page doesn't even show 3090 data. I could probably literally replace my four 2080 Tis with two 3090s and get the same performance plus 48 GB of VRAM for textures. As soon as I find out what to buy that reliably works with Cycles, I'm pulling the trigger on this.
It was interesting to note, though, that EEVEE performance was better, but not twice as fast.
I understand that once the data is transferred it makes little difference, as the majority of the work happens on the card.
... The less often you have your render engine of choice update the display, the more performance you'll get.
I suspect there might be instances where it could be helpful: ultra-high sync rates, I think. Or say you have to upgrade because you need a new card and the intention is to move to a larger screen size, perhaps.
... If you're going to get pretty much the same performance as a gamer (and you don't render), then why upgrade? Plus, if you're getting the same performance, it's possible that upgrading something else in your system might actually give you more fps.
I was looking at some results tables, and the comparison with the 2080/2080 Ti wasn't done with data from the original release but with what's available now; there was a 20% increase in performance between the results done here and those done two years ago.
The performance difference is minimal, and in rendering it's very minimal. Data is transferred once, not continuously as it is in games, so there is next to no performance hit. The same card in a gen 3 system might take a few seconds longer to complete a render than it would in a gen 4 system.
You forgot how often the render engine updates the display with the rendering image - presuming it is set to update. It can make a small, even very small, but measurable difference.
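To make that concrete, here's a rough CUDA-style sketch (nothing to do with Iray's actual code; the kernel and the update interval are made up). After setup, the only bus traffic during the render itself is the framebuffer readback, and how much of it you get is set entirely by how often the viewport refreshes:

// Toy progressive-render loop (NOT Iray's real code): after setup, the only
// PCIe traffic is the device-to-host framebuffer readback for the display.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Hypothetical kernel: accumulates one more "sample" per pixel each iteration.
__global__ void accumulateSample(float* fb, int n, int iter) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) fb[i] += 1.0f / (iter + 1);       // stand-in for real path tracing
}

int main() {
    const int numPixels   = 1920 * 1080;
    const int iterations  = 1000;                // total samples per pixel
    const int updateEvery = 50;                  // display update interval - the knob in question

    float* h_fb = (float*)malloc(numPixels * sizeof(float));
    float* d_fb = nullptr;
    cudaMalloc((void**)&d_fb, numPixels * sizeof(float));
    cudaMemset(d_fb, 0, numPixels * sizeof(float));

    for (int iter = 0; iter < iterations; ++iter) {
        accumulateSample<<<(numPixels + 255) / 256, 256>>>(d_fb, numPixels, iter);

        // Device-to-host copy only when the viewport refreshes. A smaller
        // updateEvery means more of these copies, i.e. more bus traffic.
        if (iter % updateEvery == 0) {
            cudaMemcpy(h_fb, d_fb, numPixels * sizeof(float), cudaMemcpyDeviceToHost);
            // ...hand h_fb to the UI here...
        }
    }
    cudaMemcpy(h_fb, d_fb, numPixels * sizeof(float), cudaMemcpyDeviceToHost);
    printf("done, first pixel = %f\n", h_fb[0]);

    cudaFree(d_fb);
    free(h_fb);
    return 0;
}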
Maybe? If that has any effect on the CUDA process. I'd want to see a benchmark of a gen 3 and a gen 4 system with the same card to see. It shouldn't, though. The CUDA process should not be affected at all by the copy. Why it would block while the PCIe bus was transferring data is completely beyond me.
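For what it's worth, CUDA does let a transfer and a kernel run at the same time: a copy issued on one stream goes through the GPU's DMA copy engine while a kernel runs on another stream, so in principle the transfer doesn't stall the compute. A rough, generic sketch (nothing Iray-specific, all names made up):

// Generic sketch: a PCIe transfer on one stream overlapping a kernel on another.
// GPUs have dedicated copy engines (DMA), so the kernel is not blocked by the copy.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void busyKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 2000; ++k)           // artificial work to keep the SMs busy
            data[i] = data[i] * 1.000001f + 0.5f;
}

int main() {
    const int n = 1 << 22;                        // ~4 million floats
    float *d_work = nullptr, *d_upload = nullptr, *h_pinned = nullptr;
    cudaMalloc((void**)&d_work,   n * sizeof(float));
    cudaMalloc((void**)&d_upload, n * sizeof(float));
    cudaMallocHost((void**)&h_pinned, n * sizeof(float)); // pinned memory, needed for true async copies

    cudaStream_t compute, copy;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);

    // Kernel on one stream, host-to-device copy on the other: they can overlap.
    busyKernel<<<(n + 255) / 256, 256, 0, compute>>>(d_work, n);
    cudaMemcpyAsync(d_upload, h_pinned, n * sizeof(float),
                    cudaMemcpyHostToDevice, copy);

    cudaDeviceSynchronize();
    printf("kernel and copy both finished\n");

    cudaStreamDestroy(compute);
    cudaStreamDestroy(copy);
    cudaFreeHost(h_pinned);
    cudaFree(d_work);
    cudaFree(d_upload);
    return 0;
}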
Not really. Several reviewers tested the 3080 with Intel chips, which are still limited to PCIe 3.0, as well as AMD chips that use PCIe 4.0. They found basically no advantage to 4.0 in everything they tested. Also, Nvidia themselves said that PCIe 4.0 would offer 2% at best over 3.0. Now perhaps in the future this will change, and I think some games will make use of it. But that is for gaming... not Daz Iray.
Daz Iray is not dependent on PCIe at all; it barely uses the bus. The data is loaded onto the GPU, and it stays there for the duration of the render, so there is very little data being exchanged over PCIe during a render. This has been tested, too: running a render at x8 has no impact at all versus x16.
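If you want a feel for the numbers, you can time the one-time upload yourself. A rough sketch (the 1 GB test chunk is arbitrary; scale the result to however big your scene is). On PCIe 3.0 x16 a pinned-memory copy typically lands somewhere above 10 GB/s, so even a multi-gigabyte scene uploads in around a second, and dropping to x8 adds maybe another second once per render, not per frame:

// Rough way to measure your own one-time "load the scene to the GPU" cost.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 1ULL << 30;             // 1 GB test chunk, scale to your scene size
    float *h_data = nullptr, *d_data = nullptr;
    cudaMallocHost((void**)&h_data, bytes);      // pinned host memory for best transfer speed
    cudaMalloc((void**)&d_data, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);  // the one-time upload
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("1 GB upload: %.1f ms (~%.1f GB/s). This happens once per render, not per frame.\n",
           ms, 1.0 / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(h_data);
    cudaFree(d_data);
    return 0;
}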
Also, I am not sure why anybody is surprised by the results posted. The Iray group already showed Iray performance weeks ago, way back on September 2. This is a 3080 versus a Quadro RTX 6000.
And this level of performance had been strongly hinted at for some time before that.
These performance increases are quite remarkable, and I was considering a 3080 up until recently... I resigned from my job of 22 years last week.
For people like me who never have a problem fitting a scene into VRAM (8 GB) and never render longer than 30-40 minutes with the sort of renders I do, the cost/benefit of upgrading is questionable. I can see the benefit for those needing more VRAM and those rendering for long hours, but for someone like myself considering the 3080 ($1,000+ here in Australia), do I really need it? Judging by the Daz galleries, 95% of people only render one-figure scenes, so I suspect I'm not the Lone Ranger regarding the need for more VRAM. Granted, pumping up the quality settings will dramatically increase render times. Yes, I want it, but do I really need it?
You won't notice any difference, I don't think. At least that's what I read here (scroll down for the conclusions; they tested a lot of games and benchmarks). To put it another way, I don't think there's an application for 4.0 that anyone's using right now - that is, one that saturates 3.0 bandwidth. I imagine some very heavy streaming applications might.
I imagine you frantically clicking at 1 second past midnight when they go on sale. Good luck!
Thanks for your quotes, too many to index here.
For a short conclusion: no need for me to get an expensive card for a minimal performance gain (yes, I know nobody asked).
Why buy a $$$ card for a mere few seconds of faster rendering as a hobbyist? For a pro it's justifiable if you make an income from your renders; otherwise it's just plain st... sorry.
Well, you're not really looking at a few seconds. You're looking at about half your render time between the 2080 and the 3080. That being said, if your render times are less than an hour, that 50% saving doesn't mean as much to someone like that as it does to someone rendering for 12 hours or more. The cost/benefit of saving 50 percent on a $500,000 house is not the same as 50% on a pack of gum, in realistic terms.
The question of how much one spends applies to all hobbies though. Pretty much all hobbies are unnecessary recreation. For 99% of people video games are a hobby, too. The same logic applies there. The 3080 can play games at very high frame rates, but existing cards can play them quite well already. Or what about Daz assets? How many models does one need?
These are questions that everyone has their own answers for.
If you just do one still per scene, I agree you should be very hesitant to upgrade, but if you render animations in a scene, then not upgrading is really going to hold you back, unless you just can't afford it.
Say what you will, but I want that VRAM, baayybeee, so I'm gonna wait till Big Navi drops and Nvidia pulls out the 3080 Ti, hopefully with 16 or more GB of VRAM. I game at 1080p, I'm gonna use PCIe 3. I'm a dummy LOL, the hype is at an all-time high lol
Yep, at first I was concerned about PCIe 3. Watched a vid by Blunty, then Jays2, then Bitwit, and basically they all say there's no difference.
My GTX 1070 works fine and is surprisingly fast on old hardware with PCIe 1.1 x16. So it doesn't look like the PCIe version matters that much, at least not with a single card and nothing else putting a heavy load on the bus.
I'm looking for a 16 GB one. I doubt Nvidia would drop a Ti with more VRAM than the Titan, unless they drop the price of the 3090 and release an actual Titan-labeled GPU.
It's fine. The 2080 Ti doesn't use all the bandwidth of PCIe 3.0.