
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
No, there is a choice between CPU and GPU Compute as they call it.
GPU and/or CPU
It has RTX support for the ridiculously priced RTX cards, which actually does perform very well; and then there is E-Cycles, which is specially designed to take advantage of Nvidia RTX cards, as I understand it.
Note that GPU means Nvidia AND AMD cards
Diffeomorphic is a far better solution, IMO; it's pretty much bug free. The 1.5 beta isn't giving me any issues as I use it. YMMV.
https://mobile.twitter.com/kopite7kimi/status/1295520974796816384
Looks like meat's back on the menu boys!
However, the 3090 is shaping up to look like a cut-down Titan instead of a 2080 Ti replacement.
The xx80 Ti is already a cut-down Titan, and other than RAM, not much of a cut-down; a major difference is that the Titan is treated as a compute card.
I did a test for G8F; G8M should be close to the same numbers, I assume. At subdivision level 3, G8F has ~1,047,552 polygons. At level 4 it's ~4,190,208 polygons. So it makes sense that going up to level 4 causes a pretty big jump in VRAM. Level 5 would be ~16,760,832, lol.
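Those counts follow directly from Catmull-Clark subdivision, which quadruples the quad count at each level. A minimal sketch, assuming a base-mesh count of 16,368 quads (inferred by dividing the ~1,047,552 figure above by 4³; not an official Daz spec):

```python
# Each subdivision level multiplies the quad count by 4.
# Base count of 16,368 is inferred from the ~1,047,552 figure at SubD 3.
BASE_QUADS = 16_368

def quads_at_level(level: int, base: int = BASE_QUADS) -> int:
    """Quad count for a mesh after `level` subdivision iterations."""
    return base * 4 ** level

for lvl in range(6):
    print(f"SubD {lvl}: {quads_at_level(lvl):,} polygons")
# SubD 3 -> 1,047,552; SubD 4 -> 4,190,208; SubD 5 -> 16,760,832
```

Each extra level is a 4x jump in geometry, which is why VRAM use climbs so steeply past SubD 3.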
It would be cool if we had more control over SubD, like keeping all the areas that are covered by clothes at SubD 1; areas like the shins and forearms that are exposed but don't have any details needing a lot of polygons at SubD 2; but the face, which needs to show a wicked scar, bumped up to SubD 5. Like a SubD weight map or something.
So not a majority of RAM freed, but enough to make a difference on whether your scene got kicked out to the CPU. For 4 characters, 300 MB each is 1.2 GB, and 4 geografts at 60 MB each is 240 MB, bringing the total to 1.44 GB. Some of that would possibly be instanced, depending on your material sets and such. I know when I first started with DAZ Studio and had only an Intel laptop with no discrete GPU and only 4 GB of system RAM, I could only CPU-render 3 Genesis 3 characters in the DAZ Carnival sets (most of the carnival sets were occluded) before I ran out of system RAM. I was not zoomed in on the characters at all, though, but the renders were at 4K.
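The tally above can be laid out as a back-of-the-envelope estimate (the per-item sizes are the rough figures quoted in the post, not measurements of any specific content):

```python
# Rough VRAM tally using the approximate per-item sizes quoted above.
MB_PER_CHARACTER = 300  # rough figure from the post, not a measured value
MB_PER_GEOGRAFT = 60    # likewise approximate

def extra_vram_mb(characters: int, geografts: int) -> int:
    """Estimated additional VRAM in MB for a given figure count."""
    return characters * MB_PER_CHARACTER + geografts * MB_PER_GEOGRAFT

total = extra_vram_mb(characters=4, geografts=4)
print(f"{total} MB = {total / 1000:.2f} GB")  # 1440 MB = 1.44 GB
```

Enough, as the post says, to tip a scene that was near the card's limit over into CPU rendering.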
If they are wearing clothing, then when the scene is sent to the renderer it is rebuilt, and the hidden parts of the models, including the body under the clothing, are deleted. I think that's why you do end up with rendering artifacts from time to time, when that process doesn't go exactly right.
Now, if you use Iray preview rendering, it's a completely different process, and I'm pretty sure the hidden polygons aren't deleted, but it would be nice to know for sure.
What I have also noticed is that the VRAM increase is not linear with the addition of figures. I guess this is to be expected if some share the same textures, but I have had scenes where GPU-Z was reporting 7 GB with only two characters, clothing, and environment (props, buildings, etc.). Then I added a third character, expecting it to drop to CPU, but it still came in under 8 GB. I would need to do a lot more experiments to determine whether shared textures were a factor, but I also suspect that the compression used by Iray is not linear and is more pronounced as more content is added.
NVIDIA GeForce RTX 3090 Flagship Ampere Graphics Card Rumored To Cost $1399 US, Will Feature Massive 24 GB GDDR6X Memory
Oh boy. I'm considering selling my car and using the bus to get the money for one ;)
Seriously though, does that suggest that the 16GB 3070 might be true too - and at what price?
Keep in mind that any prices you hear are totally not final. Specs may be pretty final here, but prices are a wild card. So this next rumor is already highly questionable.
Some new rumors are saying the 3090 will release after all, and have a big price tag. I think it makes sense, in a way; it is all about the name. Calling it a "3090" implies a higher tier than a "3080 Ti" would, which is why they would charge more for it. The 3090 in this rumor would have 24 GB and cost $1,400, with the Founders Edition at $1,500. I know exactly what CEO Huang will say: he will compare the 3090 not to the 2080 Ti, but to the Turing Titan RTX. The Titan also had 24 GB of VRAM, and the 3090 would absolutely smash the last-gen Titan in performance. And the Titan was $2,500. So the 3090 for $1,500... hey, that's a great deal!!! LOL. But seriously, it could be worse, IMO. If it indeed has 24 GB of VRAM and offers so much more performance, I think a fair number of Dazzers will be happy to pay that. Yes, I said Dazzers.
I'm sure the pricing will stir up some debate. I think I mentioned this before, about whether it is called a 3080 Ti or a 3090. If it is called a 3080 Ti, then the pricing would be close to the 2080 Ti's. But calling it a 3090 changes things up just a bit. The question is, if the 3080 Ti exists at the same time, where does it fit in the lineup? Perhaps $1,100? There is still a gap between the 3080 and 3090 to be filled.
The rest of the cards seem to have prices more in line with their Turing counterparts. Like $500 for the 3070, which would equal the launch price of the 2070 (the Founders Edition of the 2070 was $600). The 3080 would be $800; the 2080 was $700, with the FE at $800.
I would assume these would be the lower VRAM versions, with the extra VRAM versions costing more.
BUT remember that prices can change in an instant. These are probably just placeholders somewhere. A common refrain is that CEO Huang may not finalize the prices until an hour before the presentation, based on the information he has. He'll put on his best mic-drop performance at this presentation to stomp AMD down, no matter what they release later on. But at the same time, he isn't going to go cheap, either. Like my example above, Nvidia will create a whole new tier in order to keep prices up. In the past the top card was called the x80; then they released a Titan and made it the top card; then they added the x80 Ti to the lineup. It's all in the name.
24 GB of GDDR6X by itself costs the manufacturer more than $400 US (at best; these cards are the first with it, we have no idea about yields, and if yields are not good the price can go much higher).
Micron has specifically said the card has 12 GB. They make the VRAM.
@outrider42 @kenshaw011267
I know, I know... clearly a triumph of hope over experience on my part. I'm just dreaming of $2,800 plus an NVLink bridge getting me 48 GB of VRAM in Cycles. I don't think it'd be as fast as my current system, but on final analysis, I really don't care.
...as it's "rumoured", still sceptical.
I'll keep saving up for that RTX Titan.
Yeah, I'm waiting till I see what AMD does, as their cards work in Cycles; I object to being forced to deal with Nvidia.
I might choose to use their products, but a render engine that only works on one brand is annoying at best.
Save yourself some cash and use Blender.
The most interesting takeaway I have from all this guesswork and speculation is that if the 3090 turns out to be the new beast from the green team, it allows them the opportunity to add a Ti variant in the event that team red kicks their arse.
Yes, me too. Or doing odd jobs to save up and buy one. 24GB with realtime ray tracing is awesome and just what I need.
All I want is an RTX 3070 with 12 GB to 16 GB of GDDR6!
$400 for just the VRAM.
Add in the cost of the GPU die (Nvidia won't tell, but based on other chips figure $200).
Power components for the reference 2080 Ti came out to around $75.
PCB (traces, layers, etc.): ~$100.
So a ballpark of $775 for the parts (not for the manufacturing, just the pieces).
It's all surface mount which is reasonably cheap, luckily. A few bucks per component (including testing and QA). But there are about a hundred components on each board. Call it $250 each.
$1025
No packaging. No design of the AIB card. No shipping. No warehousing costs anywhere in the supply chain. No Nvidia cut. Add those in and where does the profit come from?
A 3090 with 24 GB of VRAM is going to cost a lot more than $1,400. There's just no way the AIBs can make any money at that price point. Math remains a thing.
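The bill-of-materials argument above can be laid out explicitly (these are the poster's ballpark figures, not actual BOM data from Nvidia or any AIB):

```python
# Ballpark BOM for a hypothetical 24 GB card, using the figures quoted above.
bom = {
    "GDDR6X (24 GB)": 400,                    # poster's estimate, at best
    "GPU die (guess)": 200,                   # Nvidia doesn't publish this
    "power delivery": 75,                     # based on the reference 2080 Ti
    "PCB (traces, layers)": 100,
    "SMT assembly + QA (~100 components)": 250,
}
total = sum(bom.values())
for part, cost in bom.items():
    print(f"{part:<38} ${cost}")
print(f"{'TOTAL (parts + assembly only)':<38} ${total}")  # $1025
```

That $1,025 excludes packaging, AIB design work, shipping, warehousing, and Nvidia's cut, which is the crux of the argument against a $1,400 retail price.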
$400 VRAM would be a hypothetical made up list price for consumers I think.
I think the 3090 will be less than $1400, if not on release then not long after when the top of the line Big Navi is announced.
The only consumers for GDDR6X chips are GPU card and cell phone manufacturers. We know precisely what GDDR6 costs manufacturers; Micron and others make those price lists publicly available (this is required in several places as part of antitrust laws). Micron has said how much more expensive they expect GDDR6X to be initially (2.5x at least, which is also what GDDR5X was). You are welcome to go find the current cost of Micron's GDDR6 chips and do the math yourself. Everything I posted is based on public data.
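To show how that multiplier feeds the estimate: take a GDDR6 price per GB, apply the ~2.5x GDDR6X premium, and scale by capacity. A sketch of the arithmetic only; the $6.67/GB base price below is a made-up illustrative figure chosen to land near the $400 estimate, not Micron's actual list price:

```python
# Hypothetical arithmetic only: the base price below is NOT a real quote.
GDDR6_PER_GB = 6.67       # assumed GDDR6 price, $/GB (illustrative, made up)
GDDR6X_MULTIPLIER = 2.5   # initial premium Micron indicated for GDDR6X

def vram_cost_usd(capacity_gb: int) -> float:
    """Estimated GDDR6X cost for a given buffer size, under these inputs."""
    return capacity_gb * GDDR6_PER_GB * GDDR6X_MULTIPLIER

print(f"24 GB of GDDR6X: ~${vram_cost_usd(24):.0f}")  # ~$400 with these inputs
```

Swap in the real published GDDR6 price and the same two-line calculation gives the actual memory BOM.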
There is a reason people who know the semiconductor industry have been very skeptical of these claims of massive increases in VRAM amounts on Ampere. The cost of GDDR6X is too high to allow it on the cards. The idea that the 3090, the 2080 Ti replacement, would have 24 GB of GDDR6X and be a mere $100 higher than the 2080 Ti is just not credible.
Add in that Micron itself has said that the card has 12 GB of VRAM...
Everything you posted may be public data, but it is not relevant, because that is not how a $300 billion company procures. Unless you are a true competitive-intelligence professional, you can't be sure what price Nvidia has negotiated, nor what kind of futures contract they were able to buy. You have probably just discovered the upper limit.
I'm not saying that the rumors are credible, just that the certainty with which you write is unwarranted because you can't possibly know what you would need to know to be so certain. Math is certainly still a thing, but so is "trash in, trash out".
Nvidia buys hardly any of these chips. The partners do, and I absolutely do know what they pay because, as I just stated, Micron et al. have to publish what they charge. And none of the partners sells enough volume to get any sort of discount.
There are three companies that make all the RAM: Micron, Samsung, and SK Hynix. So when demand or price fluctuates, they can shift production from one sort to another. This greatly affects the enterprise world, so it is tracked very closely. I get the DRAM prices in an email every morning. I can easily enough go to the same source and get the GDDR prices. These are not secrets. After the companies got busted the last time, or the time before that, for price fixing, they had to start publishing their pricing.
I'm not much of a tech hound; the two new computers I got this year were the first brand-new hardware I've bought since 2006. But I've read that 12 GB number too, from one of the many articles posted to these forums. Instead of what you claim, though, hold onto your hat: the 12 GB claim was per side of the GPU card, so that the card actually has 24 GB of RAM in total. I've never seen a GPU like that, but apparently they are needed sometimes. I have seen plenty of double-sided RAM memory sticks, though.
I don't believe for a second Nvidia is paying $400 US for 12 GB of GDDR6X RAM, and I don't believe for a second the 3090 is going to have only 12 GB of RAM, not only because of what all those rumour sites claim, but because we all already know what is on the entire lineup of Nvidia 20-series GPUs. A 3090 card with only 12 GB of RAM would be a major blunder on Nvidia's part, one I don't think they will make. They know that to make real-time ray tracing really fast, their cards must have enough RAM that the GPU has super-fast access to; there is no way around it.
It doesn't really matter what we believe, or how much we've read about on the net.
... Until it's announced, it is at best speculation, with much of it being guesswork, if not pure guesswork.
When the time comes, some will claim to have been right; that isn't because they had accurate data, only that some guesses always stand a chance of being correct.
I never said $400 for 12 GB; I wrote $400 for 24 GB. And Micron did not say anything at all about it being one side of the card; they just said it would be 12 GB. Putting chips on both sides of the board is less than ideal on a consumer card. They need to be cooled, and that would mean adding some sort of cooling to the backside of the PCB, making the cards even thicker and messing with spacing even more. Instead they would just go with 2 GB chips. That's how the current Titan RTX works. That would be far better than making a card that is not just 2 or 3 slots wide but also protrudes backwards.
Going up 1 GB from the 2080 Ti would be fine. It's not like there are any games out there pushing the boundaries of 11 GB. Consumer cards are sold to gamers, and for gamers 8 GB is still more than enough (Cyberpunk is the big touchstone release on the horizon, and the recommended hardware is a 2060, so it will run with 6 GB of VRAM). They're bumping up to 12 to match the consoles for those who play at 4K, still a tiny minority of gamers.
Agreed. This is a complete load of bollocks, to be honest. Anyone believing that Nvidia is going to put 24 GB of VRAM into their "flagship" graphics card, which is supposedly near equal to or more powerful than a Titan RTX (which has the same amount of VRAM), and then sell it for half the price of a Titan RTX, is living a pipe dream. I don't see there being any significant increase in the VRAM amounts. There's just no point to it.
...