Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
I looked up which 3090 is now being sold for $1,699; it is the FTW3 Ultra.
This is a list of launch day pricing from Newegg back in September 2020:
Asus ROG Strix GeForce RTX 3090 OC Edition (ROG-STRIX-RTX3090-O24G-GAMING) - $1,799.99
EVGA GeForce RTX 3090 FTW3 Ultra Gaming (24G-P5-3987-KR) - $1,799.99
EVGA GeForce RTX 3090 FTW3 Gaming (24G-P5-3985-KR) - $1,729.99
EVGA GeForce RTX 3090 XC3 Ultra Gaming (24G-P5-3975-KR) - $1,619.99
Asus TUF Gaming GeForce RTX 3090 OC Edition (TUF-RTX3090-O24G-GAMING) - $1,599.99
EVGA GeForce RTX 3090 XC3 Gaming (24G-P5-3973-KR) - $1,549.99
EVGA GeForce RTX 3090 XC3 Black Gaming (24G-P5-3971-KR) - $1,529.99
Asus TUF Gaming GeForce RTX 3090 (TUF-RTX3090-24G-GAMING) - $1,499.99
Because the actual SKU is listed, I was able to double-check: the FTW3 Ultra is indeed $1,699 right now, which is $100 BELOW the card's original MSRP. That is no doubt a true first: you can buy a 3090 below its MSRP at retail.
I do think things will continue this way for a while. I am a bit hesitant because crypto looked like it was rebounding, and even worse, Ether postponed the Proof of Stake shift by a few months. (Not really a surprise; they have been delaying it for years.) However, the profitability of mining is NOT getting better for miners in spite of Ether's value going up. I think I read that a 3090 can only make about $4 a day right now. Then crypto crashed, again, just a few days ago.

So miners are not going to buy new cards just yet; it makes no sense to do so. It takes too long to make the money back, and Proof of Stake is still slated to happen soon. Rather, most miners are just sticking with the hardware they have and mining for as long as they can.

At this point, it looks to me like the only way this changes is if some new or different coin explodes in value. Right now I don't see one, but the market can shift very fast and you just never know. My gut feeling is that a new coin will explode after Ether goes Proof of Stake, whenever that is, because the miners are not going to simply stop mining. Something has to take its place. In this regard, the delay on Proof of Stake may actually be a good thing, since it might delay this new coin from blowing up the GPU market.
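The "it makes no sense for miners to buy" point is easy to sanity-check with back-of-the-envelope arithmetic. The card price and the ~$4/day figure come from the post above; the electricity cost is purely a hypothetical assumption for illustration.

```python
# Rough miner payback estimate. Price and daily revenue are from the post;
# the power cost is a hypothetical assumption (~350 W at ~$0.15/kWh).
card_price = 1699.00          # FTW3 Ultra street price quoted above
gross_per_day = 4.00          # approximate mining revenue per day from the post
power_cost_per_day = 1.25     # hypothetical electricity cost

net_per_day = gross_per_day - power_cost_per_day
payback_days = card_price / net_per_day
print(f"{payback_days:.0f} days to break even")  # well over a year
```

With numbers like these, the payback period stretches past the expected Proof of Stake date, which is exactly why buying new cards to mine looks irrational.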
Finally! But I'm still waiting to see what the RTX 4000 series will be, this deep into the RTX 3000 series lifecycle...
@nonesuch00
Ditto.
I've wanted the 3090... but heck, I've waited this long, so what is a few more months? The pricing on the 4090 (estimated) looks good enough to wait anyway.
Laptop user so please forgive my desktop newbie-ness here. If the RTX3060 12gb is 1/4 the price of the RTX3090, why not stack 2 of them for half the cost?
3060s can't share VRAM, so you're not really getting 24GB, you're getting 12GB with the rendering distributed across two cards. Also, even two 3060s combined have significantly fewer CUDA cores than a 3090. Two 3060s would be faster than a single 3060, but nowhere near a 3090.
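The core-count gap is easy to put in numbers. The counts below are Nvidia's published specs (RTX 3060: 3584 CUDA cores; RTX 3090: 10496); this is just illustrative arithmetic, not a benchmark.

```python
# Why two 3060s don't add up to one 3090: Iray doesn't pool VRAM across
# cards, and even the combined CUDA-core count falls short.
cores_3060 = 3584    # per Nvidia's published spec
cores_3090 = 10496   # per Nvidia's published spec

two_3060s = 2 * cores_3060   # 7168 cores total, but still only 12GB usable VRAM
shortfall = cores_3090 - two_3060s
print(two_3060s, cores_3090, shortfall)
```

So even ignoring scaling overhead, a pair of 3060s offers roughly two-thirds of a 3090's raw core count, with half the effective VRAM.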
My current machine is an older one, and the system RAM is already at the maximum of 32 gigs that this motherboard can support. I was waiting for that rumored 16-gig 3070 card that got scrapped at the last minute, so now I'm looking at the 12-gig variants of the 3060 card. I was of the understanding that one should have double the system RAM compared to the VRAM, so 16 gigs (of that now-never-to-be-made card) would have been about right... but now I see lots of talk of needing 3 to 4 times the system RAM relative to what you have in VRAM.
I am on a 1060 card right now, with 6 gigs of VRAM. I have a handful of scenes I made that turned out to have too much stuff in them to render on my current setup... including one that brought the entire machine to a crawl, with none of the GUIs of my open programs responding in remotely timely fashion, and Daz not responding to the GUI at all and looking completely locked up, simply because I tried to go into Iray mode in the viewport just to see how the HDRI looked in the scene. I literally had to turn the dang machine OFF to get out of that. 0O That one was a case of too many background characters and stuff. (I was trying to do a large street scene, with several cars on the road, each with drivers and stuff in them, behind the main action of the scene.) I'm HOPING that one will at least render with 12 gigs of VRAM, without bringing my machine to its knees, if I get a 3060 card.
Most of my scenes render fine, though I'm running into the problem of several of my projects wanting to take 2 hours to render a given scene.... which is a problem when you're trying to do comicbook-narrative stuff, and want to do 50 to 100 frames of story at that level of scene-complexity and picture-resolution, so I was hoping for a major speedup when going from a 10-series card to a 30-series card.
I'm (probably) not waiting for a 40-series card before upgrading, because I'm getting exasperated at these 2-hour render times... and I have several projects that have been on hold for over a YEAR because of the limits of my current setup, and am tired of waiting to finally get around to doing those. I was figuring I'd get a 3060 for my current machine, then a year or more from now buy a whole new machine with a 40-series card in it, and whatever the next generation of AMD motherboards and CPUs are promising... at some point after they've gotten whatever bugs and kinks out of the system for those.
I had heard it said that a 20-series card was rather faster at renders than my 10-series card, and that a 30-series card was rather faster than the similar-ranged 20-series before it, so I was expecting a larger jump in performance going directly from a 1060 to a 3060, i.e. 10 or 20 times faster... but somewhere here someone seemed to be saying that a 3060 would be only 4 or 5 times faster than my 1060 card is now.
How much faster WOULD a 3060 card with 12 gigs of VRAM be in my current machine with 32 gigs of system RAM, vs my current 1060 with 6 gigs of VRAM?
3 or 4 times is a thing that is sometimes said here by some people who generally have well-founded opinions. I was on board with it because their reasoning makes sense, but as you can read in the thread, the idea has been contested by folks running double and doing fine that way. Looking further, I found many articles about system specs w/respect to video cards and RAM say double is fine. In light of all that, I think a 3060 on 32GB RAM system should be fine. I'd like to tell you, "I have 32GB and that is all you need," but the prices were such that 64GB made sense to me and that's what I bought. So, someone else will have to tell you that.
That is what the Iray Benchmark thread is for. We have lots of benchmarks from a variety of scenes in it. https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking/p1
I don't think the 1060 has been benchmarked in a long time. But you can do this yourself and get an exact time. The instructions are all in the first post. Download the scene save, load it up and render.
Then you can compare other benchmarks to yours, and get an idea of what to expect from any new GPU that has been tested. Do pay attention to what version of DS was used, as this can affect render speeds some.
But in general, I believe the 1060 gets roughly 2 iterations per second in this test. The 3060 gets 8, so it is about 4 times faster. This may not scale across every scene, but interestingly the RTX cards actually perform even better when scenes get more complex. Meaning the gap grows. So your super complex scenes will render much faster than what you are getting. But generally, I would expect your 2 hour render to be cut to around 30 minutes.
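The arithmetic behind that 30-minute estimate is simple enough to sketch. The iteration rates are the rough benchmark figures quoted above (treat them as estimates, not guarantees):

```python
# Projecting render time from benchmark iteration rates quoted in this thread.
# These are rough numbers; real scenes may scale differently (RTX tends to
# pull further ahead as geometry gets more complex).
its_per_sec_1060 = 2.0    # approximate, per the post above
its_per_sec_3060 = 8.0    # approximate, per the post above

speedup = its_per_sec_3060 / its_per_sec_1060
current_render_min = 120                      # the 2-hour render mentioned
projected_min = current_render_min / speedup
print(speedup, projected_min)                 # 4.0, 30.0
```

Since the benchmark scene is simple and RTX cards widen the gap on heavy scenes, 30 minutes is arguably the pessimistic end of the projection.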
VRAM is certainly a concern. 12GB is great, but if you really do have a lot of stuff going on you may still run out. The good news is that you have more wiggle room to play with and potentially some optimizing can get you under 12 if need be. With 6GB, you would have no hope of optimizing big scenes without making a sacrifice to quality.
As for 32GB of RAM, that just depends. Every use case is different, and this may be enough. But it may not. There is just no way to know for sure until you hit the render button. For me personally 32GB was enough most of the time when I had a 1080ti with 11GB of VRAM. But I did run out a couple times and used over 50GB of RAM in a scene that still fit in my 11GB VRAM. So I can testify that you can run out in some cases. Again, optimization can help you squeeze the most out of what you have.
As for the next generation, I doubt the 4060 will release near launch anyway. We might be a whole year away from a 4060-class product. Nvidia historically releases the top 2 or 3 cards first, and the x60 models follow several months later. So we may not even see a 4060 until 2023. I can totally understand not wanting to wait that long, especially if you make money doing this stuff. $400 for a potential 4X speedup sounds like a good investment to me. You can get more work done and save up for better hardware in 2023. You could maybe even use a 4060 and a 3060 at the same time and see a huge speed boost; that would certainly be more than double the speed of the 3060 by itself. I run a 3090 plus a 3060. So far I rarely run out of VRAM on the 3060, so both cards run together most of the time. Running two decent cards together is a very solid move for rendering.
Thank you. I might poke that benchmarking scene with a stick later. A render of 30 minutes, vs 2 hours, I can live with.... though I had been optimistically hoping it would be closer to 10 minutes... or something.
Oh well, time will tell, I guess.
That is just an estimate, I don't even have a proper 1060 benchmark to compare to. I am basing all of it on the assumption the 1060 hits 2 iterations per second in this bench.
But like I said, the RTX cards actually expand the gap as scenes get more complex. The bench scene is NOT complex. So there is a possibility it goes beyond 4X. However, the 3090 hits about 19 iterations per second in this same bench. So a 3090 probably could hit that time you hope for, since that is over a 9X increase.
And I just have to mention again that the 4070 is rumored (very important to remember it is a rumor, though several sources are saying this) to perform faster than a 3090, along with having 16GB of VRAM. The 4070 is more likely to launch when the next generation does; the x70 model historically launches with the x80, or just a couple of weeks after. So that would come out much sooner than a 4060. Still, we are talking several months away, around September.
When I say more complicated, I specifically mean geometry. Geometry is where RTX shines the most. The more geometry you throw at them, the faster they go compared to GTX cards without ray tracing. There was another bench thread with strand hair. This strand hair simulated complex geometry. Nobody uses this bench anymore, but the first RTX cards had absolutely insane numbers compared to GTX cards.
https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1
There are some 3000 series cards benched in it. Just look at the times compared to GTX cards, and the GTX cards are faster than yours. I did see a 1050 which scored about 1 iteration per second.
The 3060 scored 18 iterations per second. So 18 times faster. The 3090 is there too. It got... 45 iterations per second. Yep, that is 45 times faster than the 1050. I would assume the 1060 would get more than 1; it might get 2 per second.
So with that bench you can see the higher end of the potential that RTX offers. It is a special scene that you probably will never replicate, but the data can still be useful, and demonstrates just how large the performance gap between RTX and non RTX really can be.
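To put those strand-hair bench numbers side by side, here is the same comparison as simple ratios. The iteration rates are the ones quoted in the posts above; the 1050's ~1 it/s is the baseline.

```python
# Strand-hair benchmark figures quoted above, expressed as speedups over
# the ~1 it/s GTX 1050 baseline. Geometry-heavy scenes are where RTX
# ray-tracing cores pull furthest ahead of GTX cards.
bench_its_per_sec = {
    "GTX 1050": 1.0,   # approximate, per the post
    "RTX 3060": 18.0,
    "RTX 3090": 45.0,
}
baseline = bench_its_per_sec["GTX 1050"]
speedups = {gpu: its / baseline for gpu, its in bench_its_per_sec.items()}
print(speedups)
```

Compare that 18x/45x spread with the roughly 4x/9x spread in the simple benchmark scene: the more geometry, the wider the RTX advantage.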
With the default compression settings, I would say 3-4 times, but if you set the medium and high thresholds high enough, you will use more VRAM without using more RAM.
Then there's the case of people not understanding how much RAM they have and/or if they are using the pagefile on the disk...
So what happens if Studio runs out of system ram or vram? Does it just fall over or can it page to disk? Does this only happen in the initial stages of a render or can it stop part way through one?
Yesterday I hit 97% system ram and 98% vram on a render. Maybe I got really, really lucky but I suspect that it was using the pagefile and possibly changing the compression ratios of the textures so that they fit into vram. Is it able to do this?
As far as VRAM goes... I read somewhere a rule-of-thumb-ish figure for G8 of 2-3 GB per character, plus the scene. So with a GTX 6GB card (which is what I have), maybe two G8s and a scene... maybe, before it kicks to CPU.
I know it's a typical "how long is a piece of string" kind of thing, but for ballpark figuring...
A 3060 might get you 4 G8s and A Scene before kicking to CPU.
Does that sound about right or?
If DS runs out of VRAM, it will drop to CPU, or stops rendering if fallback to CPU is not enabled.
If DS runs out of RAM, it starts using the pagefile (on the disk) and the computer becomes sluggish until there is not enough pagefile either, at which point the computer most likely crashes.
...when I was rendering in Iray on the CPU with only 12 GB of system memory (actually 11 after Windows and system processes), it would frequently dump to the virtual RAM partition on the HDD. And yes, that was very, very slow. I dedicated more space to the partition so at least the programme didn't crash.
Really disappointed my MB's BIOS does not support the 3060 I was able to get (even with the last BIOS update that was available), so the card is just sitting in the box waiting until I can get all the parts to upgrade the system for W11.
Hopefully you have the power supply capable of handling a 4090 when they come, as it seems that the 4090 will have a 600 watt TDP..
I have an RTX 3060, and I have been able to get up to 10 Genesis 8 figures in one scene, with hair, clothing and so on, plus Urban Sprawl 2 for the background and ground fog. And my card was nudging around 10 to 11 gigs of VRAM used. Although to get that many figures in one scene, I did drop the skin texture resolution on all the figures from 4096 to 2048.
It seems like Iray in DS can be more greedy for system resources than Blender and other programs.
Long craved, functional, high tech equipment "sitting in a box" is a most depressing situation. As are unused motherboard ports and insufficient storage.
Thanks for the info, I'm guessing I was using the pagefile but it's on an SN750 nvme drive so the performance hit wasn't catastrophic.
That's an interesting question... how DO I make sure my current motherboard won't say "Nope!" to the 3060 card? 0o
MSI 970A-G46 mobo
AMD FX-8320 eight-core processor 3.5 GHz CPU
Win10 Pro 64bit. 32gigs of system RAM
Can I borrow it from you until you can get your parts sorted out?
I'm going to disagree with the reply that called this useless.
I don't know about that specific GPU, but I kept my old GTX 1650 to use alongside my 3060, because it means I can have the monitor plugged into that, and have Windows/web browsing/etc using the 1650's processor/VRAM and letting the 3060 just get on with the render.
No.
These two scenes, each with at least ten figures, needed me to optimise assets in the background:
https://www.daz3d.com/gallery/user/5700486984368128#image=1195434
https://www.daz3d.com/gallery/user/5700486984368128#image=1163042
These, all with at least five figures (and a 3D background, some of them extremely non-trivial) did not:
https://www.daz3d.com/gallery/user/5700486984368128#image=1215805
https://www.daz3d.com/gallery/user/5700486984368128#image=1208537
https://www.daz3d.com/gallery/user/5700486984368128#image=1188789
Obviously, it depends on the assets, but at least with the set up I'm using (where, as stated above, I've got a 1650 freeing up the VRAM from running the monitor), a 3060 can probably handle at least six figures most of the time, and past that, you can probably afford to optimise figures in the background.
I think the issue with GPU support is related to the change over to UEFI firmware on motherboards. Really old boards that predate this change can have spotty support. You can try looking up your board and see if there is a BIOS update for it that pertains to recent GPUs. Sometimes they work. However there is a possibility that it may not work. Just make sure you are able to return your GPU if by chance it doesn't work.
Also, another thing that might be overlooked here. Monitor support could be an issue too. If your PC is that old, then perhaps your monitor is, too. The 3060 only has Displayport and HDMI. If your monitor is really old, you might need an adapter to plug the thing up.
As for multiGPU, I would probably avoid trying to combine a really old GPU with a new one for rendering. You can certainly have them both installed, and use the old card to drive the monitor. But using both cards to render might cause crashes. I had a lot of problems trying to render with a 3090+1080ti combo. It might work, but it might crash. When I got the 3060 to replace the 1080ti, the 3090+3060 combo has been rock solid, only 2 crashes I can recall since October, and I think they were dforce related anyway.
The newer versions of Iray clearly favor RTX cards, too. I am not just talking performance. Stability of Daz Studio in general tanked for me personally (and others) after the RTX updates. GTX cards have to emulate what the RTX cards do because they lack the ray tracing cores. I think this can cause problems. Nearly all of my Daz crashes happened after starting a render. I feel that is a pretty strong indicator that something was just not quite right in the transfer of the scene to the GTX GPUs that could potentially crash the app. Upgrading to Ampere ended those crashes almost entirely. I also sometimes use the 3060 by itself, for testing, and it never crashes. I am not saying Daz crashed all the time, but it was certainly frequent enough to be annoying and disruptive to my workflow.
If it is helpful to anyone:
EVGA has the 12G-P5-3657-KR for $389.99, available to add to the cart and check out. Also, RetailMeNot has 10% off codes that currently work. Just be sure to put the code in the associate box, not the coupon box.
Ok....
I am pretty sure you can get 'more' into a scene if you share display duties with an additional card and optimise the assets in the scene... that is not what I meant.
If you pick the first 4 G8 characters you run across if you go to the store homepage, the first "scene" you come to on that same page, and the first 4 clothing sets for those characters...
Put them all into a scene without optimization etc... could you render it with just a lone 3060? If there is ONLY a 3060 in the machine, doing the rendering and running the display etc.?
I guess I needed to be clearer about what I was asking/stating... so I'll take the hit on that one, but I kind of feel like it is one of those situations where I'd say:
Me: "a 2002 Toyota Camry can't go 165 mph".
You: (strapping a rocket on top of a 2002 Camry) "Ha ha! 205 mph!"
ME: "I meant a stock 2002 Camry, on a flat road in normal driving conditions".
Yeah. I can get a fully functional G8F with clothing and hair down to 12MB VRAM usage, but that is me doing optimizing and/or knowing what to load and what not to load.
If one just chooses 'random' characters, hair, environment and props that have been released within the last ~6mo, the resulting VRAM usage can be anything between 3GB to 50GB
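The "2-3 GB per G8 figure plus scene" rule of thumb being debated here can be written down as a crude estimator. Every default below is an assumption for illustration only; as noted above, real usage with unoptimized recent assets can land anywhere from 3GB to 50GB.

```python
# A crude sketch of the per-figure VRAM rule of thumb from this thread.
# gb_per_figure, scene_gb and overhead_gb are all assumed values, not
# measurements; real figures vary hugely with texture resolution.
def estimate_vram_gb(figures, gb_per_figure=2.5, scene_gb=2.0, overhead_gb=1.0):
    """Ballpark VRAM need: figures plus an environment plus Iray/display overhead."""
    return figures * gb_per_figure + scene_gb + overhead_gb

for n in (2, 4, 6):
    print(n, "figures ->", estimate_vram_gb(n), "GB")
```

Under these assumptions, two figures fit a 6GB card only barely, and four figures already brush past 12GB, which is roughly what the rule of thumb predicts; optimized assets shift all of these numbers down substantially.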
...my current Daz system:
ASUS P6T X58 MB with socket LGA 1366 and two PCIe 2.0 x16 expansion slots.
Intel Xeon X5660 6-core CPU @ 2.9 GHz (upgraded from an i7 930, 4 cores @ 2.8 GHz), with Cooler Master Geminii-S CPU cooler
24 GB (6 x 4 GB) DDR3 1333 memory (upgraded from 12 GB [6 x 2 GB] DDR3 1333).
Nvidia Maxwell Titan X 12 GB (upgraded from Gigabyte Maxwell GTX 750 Ti 4 GB).
Yeah, decade old tech.
Planned upgrade
ASUS Prime H750+ LGA 1200 MB with PCIe 4.0 expansion slots and M2 slot
Intel i7-11700KF 3.6 GHz 8 core CPU with Noctua NH-D15S CPU cooler
64 GB DDR4 (2 x 32 GB) DDR4 3200 memory (with 2 open slots for future upgrade).
Samsung 980 500GB M.2 SSD.
Cost: $800
Already have: EVGA RTX 3060 XC 12GB GPU (with backplate)
I think you're massively overestimating the amount of work the 1650 is doing here if you think this is like strapping a rocket to the roof. For the most part, the 1650 is doing the job of letting me use the system more freely while the 3060 is running. If I were prepared to leave the computer on its own, or just sit here typing up notes for a novel, then the 1650 hasn't got much lifting to do - it'd use about a GB of its VRAM for that job. One can actually often save more VRAM than that by simply remembering to restart DS in bounding box mode so that the textures aren't loaded for both OpenGL and Iray. This is less like a rocket on the roof, and more like turning off the air conditioning and taking the spare tyre out of the back.
The 1650, however, gives me more options to be impatient and to work on things while rendering, like Blender modelling, or watching YouTube in 4K, or whatever. I don't have a lot of things to do in my life that don't involve using the computer.
And I did show a seven figure scene there that was unoptimised, with a modelled background. Do I think the 3060 could have handled that without the help? Yes, almost certainly, if I'd bothered to do things like bounding box mode and just left it to run undisturbed while I was out.
My threshold for resource problems is generally at least six characters, and maybe eight at a pinch if I'm smart about it. By ten, that's when I expect that I'll very likely be optimising. Certain things will pull that down, but I've *never* had an issue with a four character scene, even on occasions when I've forgotten to switch the monitor back to the 1650 after a gaming session, so I think your assessment that a 3060 might do four characters and a background isn't doing it enough credit.
(In any case, even if one does need it to run the monitor as well, W10 takes a fixed chunk of VRAM for that, and things like watching YouTube use the same absolute amount of VRAM on a 6GB card as on a 12GB card, so all these overheads are effectively proportionally half the size on a 12GB card. It can therefore expect to handle scenes more than twice what a 6GB card can... particularly as there's only one 6GB RTX card, and any GTX card needs to waste VRAM loading RTX emulation libraries.)
I don't think I am overestimating the 1650, and I am certainly not questioning your optimization prowess... I've seen, and love your artwork.
I just wanted to know if the rule of thumb I had heard and stated above, regarding VRAM usage for 'modern' G8s, was close, or at least in the ballpark.
Without any optimization, G8F + hair + clothes... how many could you "expect" to put in the randomly selected 'Neon Bar Environment' before the 3060, with 12GB of VRAM, shat the bed and went to CPU? If the answer is 3 or 4, then the rule of thumb is kind of close. If it's 6-8 figures... then the rule of thumb is wrong and I will cease to comment about it.