Comments
On the comparison page, only the 3-slot RTX 3090 is shown to have the NVIDIA NVLink™ (SLI-Ready) option, not the 2-slot 3080 or 3070.
Source: https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/compare/
I'm trying to figure out if I can even fit the 3090 in my case alongside my 1080 Ti; I don't think I can fit two 3090s, lol.
Why would you need to? The 3090 is so far ahead of the 1080 Ti... Here's the thing with dual setups: you have to be very careful with thermals. Your temperatures go up, your clocks go down. Putting two cards in one machine (I'm talking air cooling here, of course) will seriously hurt the airflow around both cards, and that will give you worse performance overall, including on the CPU, which is sitting in that hotter environment too. It may be better overall, but it's never 2x, so you have to ask whether the value proposition of that second card is worth it.
Has there been any indication of whether the 3090 can be put into TCC mode?
I am disappointed but not surprised that the 3070 is stuck with the same 8GB of VRAM that I have in my GTX 1070. VRAM is *the* limiting factor of Iray, and Nvidia continue to ignore this or push us towards the expensive end, which is beyond the spending capacity of many hobbyists such as myself. I'm guessing that OptiX and some of the other fancy features in the new cards will require even more VRAM, so my scenes will soon be down to a couple of characters with texture sizes reduced beyond the point where the seams start to show. High hopes dashed for me, but I can see the thrill for those with cash to burn.
I don't think Nvidia has Iray or equivalent renderers in mind when designing the 3070. The 3070 is a perfectly balanced card for gaming, able to run 4k at 60 fps easily and will end up being the best selling card from the lineup. 8GB is good enough for 4k right now, though certainly could be better (and probably will after AMD launches their cards in turn).
I would still wait. I'm pretty sure they'll release another set of cards with more RAM. They'll probably wait until AMD releases RDNA2 to announce them officially. So I'm going to hold off on buying for a while, I think.
Some of the rumours mentioned that they expect the double-sized VRAM to be released in Spring next year (another 6 months to wait) and then only if they are pushed into producing those variants by whatever ATI release in the meantime. Again, my hopes have a habit of biting the dust so I'm not raising them yet. My son was looking for a GPU upgrade and offered to buy my 1070 if I upgrade to a 3070 but that looks unlikely now too.
Six months isn't long to wait though, in the grand scheme of things. I'm not buying two cards this release cycle that's for sure, so waiting it is.
With a pair of 2080 Ti's, you are sitting pretty and have some time and options. I am stuck with a single GTX 1070 and have to do something, because my 1070 is now two generations old and I need more memory and CUDA cores to do the scenes I want to do.
I first want to see some Iray performance comparisons. For now, I'm actually pretty satisfied with my 2070 anyway, though some additional memory would be nice. So I'll just see what the Iray benchmarks are, and hope for a 3070 variant with somewhere between 10 and 16 gigs in six months. A 3070 Ti or 3070 Super could be an interesting improvement, both in performance and memory.
It isn't fast enough. PCIe gen 5 is, but that is a couple of years off. The motherboards that had NVLink built in were custom ones for the DGX computers that Nvidia sold.
A pair of 2080s or 2070 Supers, the slowest NVLink commonly available, can transfer 25 GB/s each way. A 16x PCIe gen 4 connection would max out at 31.5 GB/s one way. So it's over half the speed of the slowest connection.
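Rough back-of-the-envelope for where that 31.5 GB/s figure comes from, assuming PCIe 4.0's 16 GT/s per lane and 128b/130b encoding (a sketch, not an official spec sheet):

```python
# Rough PCIe 4.0 x16 bandwidth estimate (one direction).
# Assumptions: 16 GT/s per lane, 128b/130b line encoding, 16 lanes.
gt_per_s_per_lane = 16e9          # raw transfers per second, per lane
encoding_efficiency = 128 / 130   # 128b/130b line code overhead
lanes = 16

bytes_per_s = gt_per_s_per_lane * encoding_efficiency / 8 * lanes
print(f"PCIe 4.0 x16, one direction: {bytes_per_s / 1e9:.1f} GB/s")   # ~31.5 GB/s

# The consumer Turing NVLink figure quoted above, for comparison.
nvlink_each_way = 25.0  # GB/s per direction on a 2080 / 2070 Super
print(f"NVLink (2080/2070 Super), each way: {nvlink_each_way:.1f} GB/s")
```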
And yes, there is a slide that does say the 3080 and 3070 do not have NVLink.
I'm rather curious, why the rush to get these newer cards? Unless I missed something, these cards probably won't work with Daz iray rendering until they update the software, which could take a few months or more, no?
Speaking for myself, my bar for buying an RTX 3090 was "will it cut my render times in Iray to under 33% of where I am now with my EVGA RTX 2080 Black (non-OC)?"
Glancing at the specs, it looks like it clears the threshold...
RTX 2080 has 2944 CUDA cores
RTX 3090 has 10496 CUDA cores
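If render time scaled inversely with CUDA core count (a big if: clocks, architecture changes, and memory all matter), the napkin math for that 33% bar looks like this:

```python
# Napkin math for the "under 33% of my current render time" bar.
# Assumes render time scales inversely with CUDA core count, which ignores
# clock speeds, architectural changes, and memory behaviour.
rtx_2080_cores = 2944
rtx_3090_cores = 10496

relative_render_time = rtx_2080_cores / rtx_3090_cores
print(f"Estimated render time vs RTX 2080: {relative_render_time:.0%}")  # ~28%
print("Clears the 33% bar" if relative_render_time < 0.33 else "Misses the 33% bar")
```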
I'm going to wait to purchase until I see some actual real-world benchmarks, but right now this is looking VERY promising, and the productivity and quality-of-life improvements this will give me are more than worth the price of admission.
I'll use my current TITAN RTX and let ya know
There's some confusion over the number of "cores" here. I think it's actually half that on the 3090, but each one does two instructions per clock. That was one part of the presentation that wasn't clear. Anyway, Digital Foundry actually got a hands-on look (one of the rare few). In games at least, RT looks about twice as fast; rasterising is around that too. I don't know how it scales with resolution. We'll have to wait for actual numbers from anyone who can get their hands on one once they hit shelves.
Another point here, I think AMD's new generation of cards is going to drop around November time. We can probably expect another round of NVIDIA cards first quarter next year then, or maybe earlier, with more memory. Or maybe that's wishful thinking on my part. The competition is good though.
I'm still saving for one, but I might grab a 3090 to drive the monitors and help with rendering as well; I wonder what AMD will do.
Whatever happens, the 970 I use for the monitors has a faulty port and does seem to be less capable than it once was.
Wait... Save cash.
Not wait... Spend cash.
... The choice is yours.
Not if one is only for monitors, or only very limited. I've been using a 970/980 Ti for about five years now and have used the 970 with the 980 about five times, if that.
Need?
Nah, lots of us manage with less. Sure it's nice to have - or do you use it for work?
EVGA just sent me an email; looks like they will have *five* 3090 models coming out, three 3080 models, and two 3070 models, at least to start. Scared to find out what the Kingpin version will cost.
https://www.evga.com/articles/01434/evga-geforce-rtx-30-series/
Looks like at least a couple of the models will still use 8 pin power connectors.
MSI will have four 3090 models
https://videocardz.com/press-release/msi-announces-geforce-rtx-3090-rtx-3080-and-rtx-3070-graphics-cards
ASUS has two 3090 models so far
https://www.tweaktown.com/news/74882/asus-intros-geforce-rtx-30-series-rog-strix-tuf-gaming-graphics-cards/index.html
Yeah. There is some real weirdness about this CUDA stuff. I sent an email to my Nvidia rep and got back nothing but a timestamp pointing into the presentation and a note that more will come next week. So don't just wait for game data; wait for rendering performance data too. While we won't likely see Iray data till people around here get the cards, the early benchmarks on Blender and V-Ray should give us some idea of how this new CUDA performs on rendering tasks.
Although I'd be really shocked if it did badly. Nvidia makes a lot of money off the enterprise and CUDA.
Looking at the options out there, your best bang for the buck might be to see if you can snag a matched pair of 2070 Supers and an NVLink bridge. At anything less than full price, that is going to be pretty cheap compared to the 3090. Even at full price you're saving $400+, though you're down 8GB.
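Rough numbers, assuming launch MSRPs (around $499 per 2070 Super, roughly $80 for the NVLink bridge, $1,499 for the 3090); street prices will obviously vary:

```python
# Hypothetical cost/VRAM comparison: 2x 2070 Super + NVLink bridge vs one 3090.
# Prices are assumed launch MSRPs, not quotes -- adjust for real listings.
cost_2070_super = 499
cost_nvlink_bridge = 80
cost_3090 = 1499

pair_cost = 2 * cost_2070_super + cost_nvlink_bridge
print(f"2x 2070 Super + bridge: ${pair_cost} (saves ${cost_3090 - pair_cost} vs the 3090)")

# With NVLink memory pooling in Iray, two 8GB cards give roughly 16GB usable,
# versus 24GB on a single 3090 -- i.e. about 8GB less.
pooled_vram = 2 * 8
print(f"Pooled VRAM: {pooled_vram}GB vs 24GB on the 3090")
```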
Wishful thinking maybe, but is it possible that the RTX 3090 has 10,496 CUDA cores AND does two calculations per clock?
I would think that if they're describing 10,496 cuda cores without an asterisk, they would be describing a tangible feature of the PCB.
Problem with that plan is that it looks like only the 3090 is going to have NVLink. I guess Nvidia is making the multi-card setup a premium feature from now on?
I find it strange that it has more than twice the CUDA cores of a Titan RTX, the same amount of VRAM, and yet it is around half the price. Any plausible explanation for this?
It's my understanding that the Titan cards were always case studies of immense diminishing returns.
Also, Titan RTX is Turing 12nm, while Ampere is 8nm, so maybe it's just cheaper for them to pack on the cores because they're physically smaller now?
That has no impact on the 2070 Super.
Big Navi can't launch soon enough. Nvidia is holding back on the $700-$1,500 gap to counter AMD. It's either going to be a 12GB 3090 (3080 Ti?) or a 20GB 3080. Both could launch right away if Nvidia chooses. If Big Navi is competitive with the 3080 but comes with 16GB, then it's probably the 20GB 3080 displacing the 10GB version at a reasonable premium. In that case I am happy to get that. If Big Navi lands between the 3080 and 3070, then we'll likely see the $1,000 12GB 3080 Ti instead. Then it's the eyewatering 3090 or bust. There is also hope that Big Navi can challenge the 3090...
No. They've had to own up and admit the physical number of CUDA cores is half the number on the slides. Otherwise those wouldn't just be generational increases but leaps. Plus they have stated how many SMs are on each card and how many CUDA cores per SM, and it works out to half.
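For anyone wanting to check the arithmetic, here's the SM breakdown being referred to, assuming the commonly cited 82 SMs on the 3090 and Ampere's per-SM split of 64 dedicated FP32 units plus 64 units shared between FP32 and INT32 (my reading of the public materials, not an official statement):

```python
# SM arithmetic behind the "half the CUDA cores" argument.
# Assumes 82 SMs on the RTX 3090, with each SM having 64 dedicated FP32 units
# plus 64 units that can do either FP32 or INT32.
sms = 82
dedicated_fp32_per_sm = 64
shared_fp32_int32_per_sm = 64

marketed_cores = sms * (dedicated_fp32_per_sm + shared_fp32_int32_per_sm)
dedicated_only = sms * dedicated_fp32_per_sm

print(f"Counting both datapaths: {marketed_cores} cores")       # 10496, the slide number
print(f"Counting dedicated FP32 only: {dedicated_only} cores")  # 5248, i.e. half
```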
A lot of this presentation was, if not outright misleading, shaded that way. They left a lot out and glossed over a lot else. As a 1080 Ti owner, I'll say they spent a lot of time hammering that 1080 Ti owners should upgrade without really making a compelling case.