Comments
I think the 3090 is pretty much the new Titan. I don't think they announced a new Quadro line for Ampere yet. When they do, you can expect something absurd (in price and performance terms) I'm sure. I think the last time the 3rd tier card was better than the previous generation's flagship was in 2004, so yesterday was really quite a step up for graphics technology. Very exciting.
...if that is the case, I hope it still allows switching to TCC mode to bypass WDDM.
I'm hoping that too; even so, I'm considering one as soon as available.
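For reference, the driver model can be inspected (and, on cards that permit it, switched) with nvidia-smi on Windows; whether the 3090 will allow TCC at all is exactly the open question above. A minimal sketch, assuming nvidia-smi is on the PATH and the shell is elevated:

```python
# Minimal sketch: query (and optionally request) the NVIDIA driver model on Windows.
# Assumes nvidia-smi is on the PATH and an elevated (admin) prompt for the -dm call.
# Whether a given GeForce card accepts TCC is up to the driver, as discussed above.
import subprocess

def current_driver_model(gpu_index: int = 0) -> str:
    """Return the current driver model (WDDM or TCC) reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=driver_model.current", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def request_tcc(gpu_index: int = 0) -> None:
    """Ask the driver to switch the GPU to TCC (1 = TCC, 0 = WDDM).
    Only takes effect if the card/driver allows it, and after a reboot."""
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-dm", "1"], check=True)

if __name__ == "__main__":
    print("GPU 0 driver model:", current_driver_model(0))
```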
FWIW I take the power requirements with a large grain of salt - here's what I observe in my system: 980 Ti, 1080 Ti, i7-5930K 6-core, CPU fan, 3 intake and 3 exhaust fans on the case.
Both GPUs running at 94 to 96% load (voltage limited, per GPU-Z 2.33.0); max power draw, total system, 583 W (APC PowerChute monitor); 980 Ti peak power draw 221 W, 1080 Ti peak draw 213 W, CPU peak at 45 W. Both GPUs ran between 196 and 210 W with the GPU load varying between 91 and 96% over a 15-minute render. The 980 Ti did get thermally throttled near the end of the render (looks like I need to clean the case filters). The fan speed on both GPUs was above 90% for the bulk of the render.
The key number here: 583 W peak, running between 565 and 575 W for most of the render, on a UPS rated at 1100 VA or 810 W.
And, with an 1100 VA UPS, my estimated battery runtime was hovering at 4 minutes.
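For anyone wanting to sanity-check those figures, here is a back-of-envelope load calculation using only the wattages quoted above (the 4-minute runtime is whatever the UPS firmware estimates; this math just shows the headroom):

```python
# Back-of-envelope check of the UPS numbers quoted above.
# 810 W is the rated real-power output of the 1100 VA unit; 583 W is the observed peak.
ups_capacity_w = 810      # rated output of the 1100 VA UPS (per the spec above)
peak_draw_w = 583         # peak whole-system draw observed during the render

load_fraction = peak_draw_w / ups_capacity_w
headroom_w = ups_capacity_w - peak_draw_w

print(f"UPS load at peak: {load_fraction:.0%}")      # roughly 72% of rated capacity
print(f"Headroom before overload: {headroom_w} W")   # roughly 227 W to spare
```

At roughly 72% load, the remaining headroom is already smaller than the 320 W quoted later in the thread for a single 3080, which puts the short runtime estimate in context.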
@nicstt
Sagan generates a Python script to convert a particular dForce hair type, Classic Curly, to a particle system and apply it to the owning character. It currently only works with that particular asset, though.
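Not Sagan's actual output, but a minimal sketch of the kind of bpy calls such a generated script revolves around: adding a hair-type particle system to a target mesh. The object name and all counts here are illustrative assumptions, and the transfer of the actual dForce strand data is omitted:

```python
# Illustrative only: the general shape of a Blender script that sets up a hair
# particle system on a character mesh. A real conversion script would also copy
# the dForce strand data into the system; that step is not shown here.
import bpy

obj = bpy.data.objects["Genesis8Female"]          # hypothetical target mesh name

# Add a particle-system modifier and grab the new particle system.
obj.modifiers.new(name="ClassicCurlyHair", type='PARTICLE_SYSTEM')
psys = obj.particle_systems[-1]

# Configure it as hair rather than emitter particles.
settings = psys.settings
settings.type = 'HAIR'
settings.count = 5000          # number of guide strands (illustrative)
settings.hair_length = 0.25    # strand length in metres (illustrative)
settings.hair_step = 5         # segments per strand
settings.child_type = 'INTERPOLATED'
settings.rendered_child_count = 50
```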
Iray 2019, which is what is currently in DS, doesn't actually have strand support; DS works around this by converting the hair to very dense geometry.
The good news is that Iray 2020 has a proper strand primitive, so if that gets implemented in DS the memory impact of hair should drop sharply and be comparable to Cycles.
So Nvidia has potentially already implemented the fix for dForce hair.
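To illustrate why a native strand primitive matters, here is a rough, assumption-laden comparison of hair baked to dense triangle geometry versus hair kept as curve/strand data. Every number below is made up for the sake of the comparison, not measured from DS or Iray:

```python
# Very rough illustration of why a native strand primitive saves memory.
# All figures here are assumptions chosen only to show the order of magnitude.
strands = 100_000          # guide + child strands in a dense hair prop (assumed)
points_per_strand = 16     # control points per strand (assumed)

# Baked to geometry: each strand becomes a thin ribbon/tube of triangles.
# Assume two vertices per control point and ~64 bytes per vertex once
# positions, normals and UVs are counted (assumed).
verts_per_strand = points_per_strand * 2
bytes_per_vert = 64
geometry_bytes = strands * verts_per_strand * bytes_per_vert

# Kept as strands: roughly just the control points plus a little per-strand data.
bytes_per_point = 16       # position + radius (assumed)
strand_bytes = strands * points_per_strand * bytes_per_point

print(f"Baked geometry : ~{geometry_bytes / 2**20:.0f} MiB")
print(f"Strand curves  : ~{strand_bytes / 2**20:.0f} MiB")
```

On these made-up numbers the strand representation comes out close to an order of magnitude smaller, which is the general direction the comment above is pointing at.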
I think it's unknown right now. NVIDIA like their driver restrictions on consumer cards, don't they? Look at NVENC. Maybe it'll get restricted to Quadro only.
They specifically said the 3090 is the Titan replacement. There was some discontent from the AIBs about Nvidia selling the top SKU, and about all the weirdness of the product stack in recent years. So they're claiming, for now at least, that this is the end of Titan, Ti, and Super.
Partners have already leaked a Super and a Ti version of each SKU. They'll presumably differentiate the SKUs based on binning. The Ti will have more RAM and the Super, I guess, will have higher clocks. I think they're waiting for AMD's presentation in November before putting them out there.
There we go, the original source is here for the Ti. Lenovo "accidentally" leaked the info, then took it down. Looks good.
LOL, when I first heard about the Lenovo leak, I thought it might just be some random placeholder text in some press release that got sent out early, but it was actually part of a pre-built system they are going to sell, including separate SKUs for the announced 8GB models and the 16GB ones?
I suppose it could be a typo and/or not really knowing the spec but really... is that likely? It's possible. The "we have heard both names in private conversations with AIBs" is telling though. I think they'll release them sometime for sure.
...yeah, if that becomes the case, I may just opt for the Turing RTX Titan instead. If I ever have to go to W10 (say, Daz or Iray drops compatibility for older OSs that are no longer supported [I've seen this happen with other software]), at least I'll have all 24GB available for rendering, not 19.7, and it will still fit my current MB and case, so I don't have to scrape up additional funds to also build a new system for it.
Yes, it's entirely likely. Lenovo isn't an AIB. They have no direct relationship with Nvidia; they buy cards from AIBs. It was at least some of the AIBs that wanted all that stuff to go away; they think it was affecting their sales due to confusion over which card was "best."
I didn't watch the stream, but I googled around and found no information regarding any next-gen Quadro models, so I suppose those weren't really mentioned here, right? I know this was a GeForce-focused announcement, but I wasn't sure if there were any hints or information as to what we can expect there.
I know its clock speed is slower, and they generally cost more, so people probably won't be as interested in those, but if they ever do raise the VRAM limit past 48 GB on the high-end card, it would be cool to see, indeed!
They didn't make any announcements. If the yields on this new node are low, and there are reasons to think they are, they might hold back on Quadros for a while, or might source Quadros from TSMC or something. All I got from the Nvidia enterprise guys was "No information at this time. Why don't you think about buying the Ampere A100 rack."
...I remember a few years ago, when Pascal GPUs were introduced, watching a video of an Nvidia presentation at a convention (SIGGRAPH?) where future development trends were discussed. It showed a chart on a large screen behind the presenter with a timeline that included Pascal, Volta, Turing, and Ampere, which also indicated possible VRAM steps ending at 72 GB. Wondering if that was a hint of what may come with regard to the Quadro line. Maybe 16 for the RTX 4000, 32 for the RTX 5000, 48 for the RTX 6000, and 64 (or 72?) for the RTX 8000?
Just a guess based on what I remember from back then.
The important questions for me are:
1. Will an RTX 3080 fit into my case? I've seen some that are an inch shorter than my RTX 2080, so I guess at least some of them will.
2. Will my system handle an RTX 3080? My rig is 4 years old but I have relatively decent specs... 24GB RAM, i7-4790K, 750W PSU.
3. Is my MB up to scratch? Well, I do have plenty of room for a multi-slot card, but is that all there is to it?
People talk about bottlenecking. Is that just a gaming issue or does that impact on rendering too?
Just saw this on /nvidia:
Yes, well, DAZ & Iray aren't that big a deal for me, so Blender is fine. More important from my perspective is the Nvidia announcement of their Omniverse product plans, which has made up my mind: even if AMD's Big Navi comes and blows the Ampere 3090 out of the water, I am buying an Ampere 3070 16GB or 3080 20GB. I fully expect to have to wait until after the new year to buy either of them, though, due to price gouging.
Yep, Redshift already supports Ampere cards. They received some pre-release units from Nvidia so they could implement support prior to release. They also said that the performance gains are quite big on scenes that make the most of the updated RT cores, and still very good on scenes that don't benefit as much from RT cores. They are hesitant to release full benchmark comparisons yet, though; they want to get release/production units before they post full details, which I can understand.
I mean, it's not really gouging. Supply is usually limited at first, as they haven't got enough stock to optimise part binning. Some chips are better than others/can run faster clocks than others. It takes a while to get the right number of standard deviations on your graph to be sure you're getting maximum $ for your stash. Wait too long and your customers go elsewhere. It's a delicate balancing act.
Start with one card, see how it goes, and consider adding another later.
I'm planning on turning my 980 Ti into the monitor card, and whatever new card I get will be for rendering; Cycles (as I use Blender for rendering) should be fine for support.
Thanks nicstt. I hadn't thought about keeping my 2080 as a dedicated monitor card and the new one as a renderer.
Again, I wouldn't call the 3090 a new TITAN just yet (it's missing key TITAN elements). You also won't "double the VRAM" with NVLINK. I've yet to see a test case where you get 100% scaling of the mem pool.
I'd love 48GB (as I'd just buy a second TITAN RTX), but I haven't seen that applied yet.
I'm guessing that once the dust settles post-Big Navi release, my 1080 TIs will be worth what a used 1070 goes for now ($200-250). I can't even imagine what the used market will be like for the other Pascal cards. I wonder if it's best to just keep them?
1) The reference cards are tall. Some of the renders of the AIB cards (which is all that is available so far) look pretty tall as well. If the cards are a lot taller than previous generations, that may pose problems for some cases. Check the specific dimensions of the card you want against the dimensions of the card you already own, or get out a tape measure.
2) Your system should mostly be fine. Wait for the reviews on how much power these things really draw. Nvidia says the 3080 draws 320W. Depending on the rest of your system, that might be pushing a 750W PSU (see the rough budget sketch after this list). But until reviews are out, no one is really sure. These power numbers seem way out of line with a process shrink.
3) Your motherboard should be fine.
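On point 2, here is a rough power-budget sketch. The GPU figure is Nvidia's quoted 320 W from above; everything else is a ballpark assumption for a 4790K system, not a measurement:

```python
# Rough power-budget sketch for point 2 above. The GPU figure is Nvidia's quoted
# number for the 3080; the other entries are ballpark assumptions.
psu_rating_w = 750

budget_w = {
    "RTX 3080 (Nvidia's figure)": 320,
    "i7-4790K under load (assumed)": 90,
    "Motherboard/RAM/fans (assumed)": 50,
    "Drives and peripherals (assumed)": 30,
}

total_w = sum(budget_w.values())
headroom = psu_rating_w - total_w

for part, watts in budget_w.items():
    print(f"{part:32s} {watts:4d} W")
print(f"{'Estimated total':32s} {total_w:4d} W")
print(f"{'PSU headroom':32s} {headroom:4d} W  (~{headroom / psu_rating_w:.0%} of rating)")
```

On those assumptions a 750 W unit looks workable, but transient spikes above the quoted figure are exactly the unknown the reviews will have to settle.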
Yes, bottlenecking is a thing even outside gaming. Some component in a system is always the slowest, and everything else is constrained by it. The question is by how much, and whether it's worth the effort and cost to change/reduce the bottleneck. In very rough terms, a modern GPU will be bottlenecked by your 4790K in a lot of tasks. Will it be in DS/Iray? Someone would have to benchmark it against other systems to see. But if you're still getting acceptable performance out of the system, there's no need to upgrade the whole thing just because you won't get every last bit of performance out of a new GPU.
Particularly since neither card was announced.
You will double the RAM if it's supported by the software (e.g. Blender 2.9 and DAZ support memory pooling, at least for textures, but sadly not geometry).
Backup cards are never a bad thing. I almost never sell old tech.