Comments
...true, but Blender (like Octane) has out-of-core rendering, which Iray does not (but needs to have), hence the need for a big-VRAM GPU.
I just don't know how you can claim that. I often hit the 8 GB limit with only three G8/G3 figures, even after the whole scene has been run through the Scene Optimiser. It is a constant battle for me to stay within the 8 GB limit and I find that very limiting indeed. I too started with a 970, but it lasted only a few weeks before I had to splash out on a 1070, which is what I'm still using. If I had the funds I would go for the new 3090, but I haven't, so I'm really hoping that a 16 GB 3070 is not far away.
However, I am looking at other options like using Blender. I've been a bit disappointed that I haven't been able to reproduce the Iray quality with Cycles yet, but I'm a Blender novice, whereas I've been using Iray for several years now.
I don't know what you're doing that I'm not, but I almost never hit 8 GB. I keep track of my logs pretty closely and I haven't had a render exceed my 2070's VRAM in months. That's almost 100 production renders for my VNs. That's why I didn't pull the trigger on two 2070 Supers to get the NVLink bonus. I could have sold my 1080 Ti and 2070 and made back most of the cost of the cards, but it just didn't make any sense.
All I am doing is making scenes with a few (3 or fewer) characters and a few props. I can't fit the big sets like the landscapes with lots of trees, etc., or interiors with lots of furniture, because they will blow my VRAM limit. I use every trick I can think of to reduce VRAM usage. Scene Optimiser is good, but take it a step too far and the seams on the skins begin to show. I have the drop-to-CPU feature disabled, so if a scene goes over 8 GB all I get is an instant black render. That happens far too often for my liking, so I'm paranoid about adding too much to the scene and obsessive about reducing the texture sizes.
So if you would like to share your secret, I'm sure that lots of us here would appreciate it.
Well, since the 3080 isn't working in Daz as of yet (read that in the benchmarking thread, I think), I'm definitely not regretting my plan to hold off for a couple of months on grabbing a 3090. Those weren't supposed to be available until next week anyways, but yeah, maybe around Christmas or early next year for me...
ATM, I'm more interested in Vermeer and how it'll fare in the benchmarks. Since I need to build a new system anyways for the 3090 (it won't fit in the computer case of my SFF build), the timing might work out pretty well for doing a build at the end of the year/early next year. Not really expecting next-gen (4000 series) Threadrippers this soon, but that'd be nice too!
I also constantly find myself well over 8 GB, and would very often max out the 12 GB (or 11?) in my old Titan. It certainly must be due to workflow and process, but it's very unlikely I could, with any consistency, create the scenes I want to and have them fit within 8 GB.
What resolution do you render at? I keep running into people who say 8 GB isn't enough who render at huge resolutions. If that's you, try 1080p.
This is a render from a chapter of my current VN that I did. It fit in 8 GB without a problem.
Yeah, no ... I render at a 5:4 ratio at a modest resolution of 1600 x 1280, which I then enlarge to 2400 x 1920 in post (using an app that doesn't lose too much in the enlargement process). I'd love to be able to render at higher resolutions, but time is also a factor and they take far too much time to render.
One question though ... how do you determine exactly how much VRAM a scene will take? I use GPU-Z once the render is going, but that seems to show that as much VRAM as possible is being used whether I add two characters or three. For example, I could have a scene with two characters and GPU-Z would report 7.3 GB being used, and I think there's no way I can add another, but I do and the figure only increases to 7.8 GB. Add a fourth and the scene is too big and it fails. It just seems odd to me that a third character should only take 0.5 GB when the other two take 7 GB between them. The few props are negligible.
I suspect that Iray takes as much VRAM as it can and then starts compressing when it reaches a certain point.
GPU-Z and all other programs of that sort show how much VRAM is reserved, not how much is actually in use. The two are not exactly the same, so you should ignore it.
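If you want something a bit more useful than staring at GPU-Z, one option (just a rough sketch, and it has the same caveat - it logs allocated memory, not Iray's exact working set) is to poll nvidia-smi while the render runs and keep track of the peak:

```python
# Poll nvidia-smi once a second during a render and remember the peak.
# This still reports *allocated* memory (same caveat as GPU-Z), not the
# exact amount Iray is using, but the peak is at least comparable
# between scenes. Stop it with Ctrl+C when the render finishes.
import subprocess
import time

peak = 0
try:
    while True:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True)
        used_mb = int(out.stdout.strip().splitlines()[0])  # first GPU only
        peak = max(peak, used_mb)
        print(f"used: {used_mb} MB   peak: {peak} MB", end="\r")
        time.sleep(1)
except KeyboardInterrupt:
    print(f"\npeak allocation seen: {peak} MB")
```

Comparing that peak between, say, the two-figure and three-figure versions of the same scene tells you a lot more than a single snapshot does.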
Keep in mind there are some basic minimum requirements that Iray needs just to start a render, and that consumes a fair bit of memory. Also, every G8 is not created equal. Some have different base SubD and therefore have very different base geometry (every level of SubD quadruples the geometry). Further, some G8s have 4K textures and some have only 2K (some newer ones have 8K textures). Those differences can really affect the amount of VRAM consumed. dForce hair is a huge VRAM hog. I don't currently use it, not due to running out of VRAM, although that did happen back in the spring when I tried it a few times, but because it took forever to simulate. I don't care about how long renders take. I let them run overnight using renderqueue, but simulations have to run while I'm semi-paying attention.
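To put very rough numbers on the SubD point (the base quad count and per-quad byte cost here are placeholder assumptions for illustration, not anything Iray reports):

```python
# Each SubD level multiplies the polygon count by 4, so one extra level
# is a much bigger jump than it sounds. The base quad count and bytes
# per quad are made-up round numbers, purely for illustration.
BASE_QUADS = 16_000      # hypothetical base-resolution figure
BYTES_PER_QUAD = 128     # rough guess at position/normal/UV data per quad

for subd in range(4):
    quads = BASE_QUADS * 4 ** subd
    approx_mb = quads * BYTES_PER_QUAD / 1024**2
    print(f"SubD {subd}: {quads:>9,} quads  (~{approx_mb:,.0f} MB of raw mesh data)")
```

So a figure that loads at SubD 3 instead of SubD 2 costs roughly four times the geometry memory before you've changed anything else.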
Iray always compresses textures that are larger than preset sizes; the settings are in the advanced render settings. The only way to know when a render has exceeded VRAM is to see that it has dropped to CPU, or to have the CPU fallback disabled and have the render fail completely.
Interesting about GPU-Z and it only reporting reserved. As you say, I know when I've gone over because I get a black render.
I don't use dForce hair (don't have any) but sometimes use fibre hair (mostly the short men's styles). I always reduce 4K textures to 2K as I don't do extreme close-ups like facial portraits. These are my compression settings:
So any texture larger than 512x512 gets medium compression and anything larger than 1024x1024 gets high compression. I think those are the defaults. That should be pretty much all textures these days.
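For anyone who hasn't dug into those settings, this is roughly how the two thresholds sort your maps - a sketch based on the values quoted above, with made-up sample sizes:

```python
# How Iray's texture-compression thresholds bucket maps, using the
# settings mentioned above (Medium Threshold 512, High Threshold 1024).
# The sample sizes are invented for illustration.
MEDIUM_THRESHOLD = 512
HIGH_THRESHOLD = 1024

def compression_bucket(size_px):
    if size_px > HIGH_THRESHOLD:
        return "high compression"
    if size_px > MEDIUM_THRESHOLD:
        return "medium compression"
    return "no compression"

for size in (256, 512, 1024, 2048, 4096):
    print(f"{size:>4} px -> {compression_bucket(size)}")
```

With those values, a 1024x1024 map only gets medium compression, while everything 2K and up gets high compression, which is why that covers pretty much all textures these days.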
While this would be great, I have difficulty believing that Nvidia would wait to reveal these only after AMD launches. That would make Nvidia look weak and reactionary, as Ampere only just released. When Kepler was announced, they listed both 2 GB and 4 GB versions of the 670 and 680; they didn't wait for AMD. Now don't go thinking that I believe these cards are fake. I do believe Nvidia has them at the ready, but not so soon. I don't believe we will see these cards until 2021, spring at the earliest. Unless AMD just stomps Nvidia.
Also, Nvidia is on record as stating that they carefully researched VRAM needs and determined that the 10 GB in the 3080 was enough. To turn around just a month or two later and double that would be, well, dirty. It would really agitate some who bought a 3080. The people buying a 3080 right now are some of their more loyal customers; you don't want to agitate them so quickly. At this moment I cannot locate this quote, but I remember seeing it. It might have been in one of the YouTube reviews, when a reviewer specifically asked Nvidia this question.
This topic is actually creating some tension in the gaming community. You have some people saying 10 GB is plenty, and some who say it won't be. I am firmly on the side of 10 GB not being enough very soon, and not because of Daz.
Hardware Unboxed's review of the 3080 was able to pinpoint VRAM as being an issue in Doom Eternal. The 2080's 8 GB bottlenecks a little bit because of this. In fact, this was the real reason the 3080 doubles the performance of the 2080 in Doom Eternal. It turns out that DE uses 9 GB of VRAM on the 3080, so it fits the 3080 fine, but bottlenecks the 2080's 8 GB. They were able to alleviate this bottleneck by dropping the texture setting down. When they did this, the performance boost of the 3080 over the 2080 was much less than the massive 95% seen at the higher texture setting.
The point being that this is an example of a game that pushes VRAM all the way to 9 GB right now in 2020. More games will likely do this, and will probably go even further now that the new consoles are launching. As I said before, the new consoles have some tricks up their sleeve to maximize VRAM usage in ways that PCs cannot. While Nvidia announced a new feature, RTX IO, we haven't seen it in action at all, and I have my doubts it will be as fast as the PS5's ability to access the SSD. It may be a big improvement, but VRAM will still be necessary, so PCs will need more VRAM to play these kinds of games at high resolutions.
Real-time ray tracing is also a huge VRAM hog. You need to have all of this stuff in memory for it to be accurately ray traced. If Nvidia is going to seriously push this tech, they need to have the VRAM to back it up, too.
As for Daz Studio VRAM, all the things that affect game memory apply. Obviously the more stuff in a scene, the more memory it needs, but the fidelity of that stuff matters even more. Geometry can eat up memory when you crank up the SubD. Texture data absolutely can, too. The number of material surfaces an object has adds up. We have hair and clothing with well over a dozen different surfaces, and each surface may have many different settings and textures on top of that. If you look at older Daz products from when Iray was still new here, you will see a lot fewer textures and a lot fewer surfaces on those products. That is why most older stuff is more VRAM friendly, and also why it renders faster. That is simplifying things a lot, but that is where a lot of VRAM goes.
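Here's the back-of-envelope version of why texture fidelity dominates (the map count per figure is just an assumption for illustration, and Iray's compression will shrink the real numbers):

```python
# An uncompressed 8-bit RGBA map costs width * height * 4 bytes, so each
# doubling of resolution quadruples the cost. Twelve maps per figure is
# an assumed, illustrative number, not a Daz or Iray constant.
def map_mb(size_px, bytes_per_pixel=4):
    return size_px * size_px * bytes_per_pixel / 1024**2

MAPS_PER_FIGURE = 12   # diffuse/normal/roughness etc. across several surfaces

for size in (2048, 4096, 8192):
    total = MAPS_PER_FIGURE * map_mb(size)
    print(f"{MAPS_PER_FIGURE} maps at {size}x{size}: ~{total:,.0f} MB per figure")
```

That is the gap between an older product with mostly 2K maps and a newer one loaded with 8K maps, and it's per figure, before geometry or anything else.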
Also, 3D scenes add to memory in different ways. If you have a room, how big is the room? Are the walls primitive planes, or are they full 3D geometry? Are there connected rooms? How many light sources? All of these choices can affect memory.
I think mattymanx's Resource Saver product does a great job of explaining things right in its promos. For real, just look at the promo pics for this and you may learn some things. It is really well done. https://www.daz3d.com/resource-saver-shaders-collection-for-iray
A great example is this pic here, which matty rendered with two 980 Tis. That means this entire scene, which has a full 3D background and 18 Genesis 2 characters, was rendered with less than 6 GB of VRAM. This is a master class in managing resources. It demonstrates that it can be done. There is no way this scene would fit even a 2080 Ti with 11 GB without many of these tricks. And I will point this out: it renders a lot faster, too. So even if you had a GPU that could fit this entire scene without optimizing, it would take longer to render. Optimization is not just about fitting under your VRAM limit; it speeds up the render itself, too, because Iray doesn't need to perform as many calculations for the scene. Obviously it is super convenient to be able to load everything and just render away, but not taking some measures to optimize can make that render take a while longer. So at that point it becomes a question of which is faster for the user: the time it takes to optimize versus the time saved in rendering. Unfortunately an exact number on that would be very difficult to come up with without actually testing and documenting it.
It is far less about how the public perceives them and far more about what makes them the most money. No doubt their plan was to introduce the other models ~a year from now to refresh the product stack and boost sales; that would be the savvy thing to do under normal circumstances. But the situation now is that there is a competitor launch coming up, and Nvidia will be wary of AMD's recent successes in the CPU market. If Big Navi turns out to be competitive and with more VRAM, the most business-savvy reaction would be to release their higher-VRAM models early to avoid losing as many sales to AMD.
Regardless of whether or not anyone 'needs' that higher VRAM amount, it's simple purchasing mentality to go for the bigger number, particularly when there isn't a huge price difference between the two. The only people that might purposely go for the smaller number, if there isn't a huge price barrier between them, are the more tech-savvy shoppers that really know what it means and what difference it makes. Most shoppers don't know that, and Nvidia knows that most shoppers don't know that, so they will act accordingly to suit their needs - i.e. as many sales as possible.
Except Nvidia has known about AMD's launch well in advance. They have to have known for at least as long as we have, and probably much longer, that AMD was looking to October/November, that they could compete, and that they might be offering 16 GB models.
So launching without any 20 GB model, staying quiet about a 20 GB model, and then just popping it out after AMD launches would be extraordinarily short-sighted on Nvidia's part. They wouldn't fool anybody with that ploy.
Making your customers unhappy is usually bad for business. Those are bridges that companies want to avoid burning. It would be a move that hurts them for the long term, so even if they did well for a couple months, it would bite them back after that. So this stuff matters.
And if selling GPUs were as simple as adding more VRAM, well, AMD would have smashed Nvidia long ago. AMD has routinely offered 8 GB versions of a wide variety of lower-tier cards that beat Nvidia's price-to-performance over the 1060 and 1050. Nobody cared. AMD also beat Nvidia to 8 GB with the 390X back in 2015; it didn't matter.
And really, if it was indeed that simple, then why not launch the 20 GB model right off the bat? The cost to them is not that significant in comparison to what they would charge.
Also, at these price ranges, many customers are going to know at least a little bit about what they are buying. If they are buying a GPU by itself, they have to at least know how to install it and run it. That in itself requires a little bit of know-how, and people that do this tend to be just a little bit more knowledgeable about what they are putting in their computers.
What makes you so sure that Nvidia has intimate knowledge of AMD's incoming products? They probably have better information than we do, but they don't know anything for sure until the products are released, same as us.
They are not trying to fool anyone, and it's not a 'ploy'; it's business.
Yep, making your customers unhappy probably is generally bad for business. But so is losing sales to a competitor when there is something they can do about it. They have market analysts that would know far better than you or I how to balance those two factors. But one thing those analysts will be advising is not to act without all the information - i.e., without knowing how competitive their competitor's product is.
I never said selling GPUs is as simple as adding more VRAM. What I said was, when there is the option between one with less and one with more, the bigger number is going to be the one that gets the sale when the other differences are marginal. There does not have to be logic to that; not everyone who is a buyer is savvy enough to know what is best for them. Labels matter.
Why not launch them right off the bat? A few reasons. 1) Like they said, and like you said, more VRAM may not be needed. But when a competitor's product has it, refer to my previous points. 2) If the competition doesn't end up being competitive, it is more beneficial for them to sit on the higher-VRAM models for a year and then release them.
Not everyone is buying for themselves. Not everyone is buying for themselves AND building for themselves. System integrators/OEMs exist. Prebuilts exist. Yes, most people that build their own computers will know a thing or two, but I think you vastly overestimate the percentage of people that build their own computers. Marketing is a thing. Can you think of any companies/products where the marketing vastly overstates the product or the consumer's need for it? I bet you can. The most valuable company in the world is a master at spinning the product to their target audience in a way that makes them believe all of it. And those products, especially recently, certainly have not been as innovative and advanced as they once were. It's all marketing.
Maybe they are holding off with the higher VRAM models to see how the 3090 with 24GB sells. Maybe they will conclude that more VRAM is a positive selling factor after all.
Here is a scene that wouldn't even CPU render until I built a new PC with 32 GB of RAM instead of 16 GB. As you can see, it's a totally typical, everyday, average scene that isn't out of the ordinary for what DAZ 3D might expect customers to render using the models they sell.
It was rendered at FHD (1920x1280) but got reduced in size when I uploaded it to the DAZ Galleries.
I won't be trying to GPU render it until I get one of the GeForce RTX 30X0 GPUs. I know that Iray rendering with an Nvidia GPU applies certain optimizations that don't happen with a CPU render, but I don't think it's going to be able to squeeze a render that takes over 24 GB when CPU rendering into the 8 GB of VRAM on a GPU.
That's not a bad idea.
I'm largely in agreement with this.
They must know of the feeling (and this was perhaps indicated in the announcement) that their loyal customers were irritated over the 2000 series' cost and what it provided.
So doing that over RAM would go against what they've apparently tried to do with this release.
Perhaps it's Nvidia trying to both have their cake and eat it, and they don't really believe AMD will compete, despite preparing for them to do so.
Guesswork at its best. :)
Of course it's a ploy, which is part of business; fooling one's competitor, making them misstep, is certainly a part of business dealings.
These aren't in the same league.
Losing a sale for no particular reason other than the buyer preferring the alternative, well, that's simple to counter - provide something better.
Losing a sale to ill will; how do you get that back? How long will it take to get that customer back? How many of the folks that customer talks to will decide to give it a miss?
Regarding preferring the alternative, providing something better is precisely the point I was making. That is where the higher-VRAM variants come in. Because while the need for higher VRAM is arguable by those who know and care, providing that exact thing once the competitor releases their offerings does exactly as you are suggesting.
Regarding your point about losing a sale due to an upset potential customer, that is kind of beside the point here, because the only people who will be upset with the situation in question are the people who have already bought the lower-VRAM variant, the people who rushed in to buy them on release. Nvidia already has their money. They may be upset to learn of a better version mere months later, but they are not going to then go out and spend more on a competitor's card just because of that. They already have a good card; it's just a few percent worse than the newer one. So at worst, when it comes time for them to replace that card, in what... two years at the earliest? Maybe then they will go with the competitor's offering. Maybe. Most won't hold a grudge for that long, and even if they do, they are not going to waste money on a worse item when the time comes just to spite Nvidia. If Nvidia's options are still better at the time, they will buy another one.
You say they might talk to friends about their own experience. What will they say? "Don't buy the better Nvidia card because they screwed me over"? Will those friends then buy an inferior product because Nvidia screwed over their friend by releasing a new version of his card that's a few percent better than his? I think their friends will still buy what's best for them, because it's their money.
No, it isn't. I thought this was well known. Those curved-surface emissives are massive VRAM hogs. You can get away with one or two, but you've got, what, a dozen?
No.
It won't only be the ones that already bought a card who are bothered. I'm sure they will be more bothered, however.
Those that waited, for whatever reason, will remember, or have others remind them, what Nvidia did. There is potential for long-term effects. It's speculation, if only just, as we don't know what's coming, but are basing it off leaked information that may or may not be accurate and may or may not be deliberately misleading. We simply don't know.
Releasing the large-VRAM variants next year would have much less of an impact than releasing them a month after the new cards' release - that could be a very bitter pill to swallow for everyone.
Time will tell.
Take me for instance; I have two Nvidia cards and I was saving for a Titan, and after having the cash I held off; then I decided to wait for the 3000 series; now I'm waiting for AMD. Why? Because I object to being tied to an Nvidia ecosystem. I may decide to go Nvidia anyway, but that will be because I feel it's the best card, not because it's the only card. How would I use AMD? I moved to Blender. I still use Studio assets, but I do all my scene setup in Blender, along with everything else.
(A side effect for Daz is that I buy less, although that's only slightly related to moving to Blender. This time last year, I took advantage of the good gift card offers. I have over $300, but I've had as high as $1000; if the offer is good I tend to buy a card. I've passed a few times recently.)
Nobody likes feeling like they got jerked around. If Nvidia launches a bunch of new cards late next month that aren't much more expensive than the 3070 and 3080 (and with the existing price points there really isn't a lot of room to slot these super cards in), then people who did buy the 3070, 3080 and 3090 are going to feel jerked around. They won't rush out to buy the new cards, of course (and who would buy the new cards in the first place?), and they will be more prone to give AMD a look in the future.
Nvidia is not so stupid as to want to mess that up. I'd be very shocked to see these new cards before spring.
You don't think they will release early if Big Navi is competitive?
If I was a betting man I'd put money on it happening sooner if Big Navi can compete, especially if their competing cards have more VRAM.
Let's take the RTX 3080 for example. Nvidia does not know what the real-world performance of Big Navi will be or how much VRAM AMD will put on it.
September: the RTX 3080 10 GB launches @ $699.
October: AMD announces Big Navi models... (What if AMD announces a 16 GB model with better performance than the RTX 3080, priced at $650 and on store shelves now?)
Nvidia will most likely counter this move by dropping the price on the 10 GB RTX 3080 to $600 and adding an RTX 3080 SUPER 20 GB at the $700 price point.
What Nvidia did this month is set the starting point for a "Near Peer" GPU price war.
Keep in mind that AMD has already announced a GPU with over half the raw computing power of the RTX 3070 inside the APU that powers the Xbox Series X, and the entire box costs $500. I wouldn't be surprised if standalone Big Navi ends up with at least double that processing power.
That looks strange to me!
I often use three clothed and haired G8 characters in a complete indoor environment, without optimizing, with 6 GB of VRAM.
I rarely run out.
I hope so, that would be cool.
I'll repeat,
1) Who is going to buy them? They'll have just released the 3070, 3080 and 3090. These Super cards would have to be more expensive. How much more isn't certain, because Nvidia is clearly just making prices up, but more is certain; note that no AIB is selling any of their cards at FE prices because Nvidia is apparently selling at or near a loss. So even if they release them, they'll just gather dust. Sure, the tiny Daz community will be happy, but that won't drive sales.
2) If they offer better performance and not just more VRAM, as the previous Super cards did, then the current buyers will justifiably be very angry and will sue. Nvidia doesn't want to hurt brand loyalty.
Some other things to keep in mind: Nvidia and the AIBs do not have unlimited manufacturing capacity. They just do not have the ability to be making all these SKUs this fast. That's one of the reasons there aren't more cards right now.
Also, if they did a release like that, they would run into some advertising issues in some US states and in the EU. Nvidia is sketchy enough without getting into that trouble.
From a marketing POV it makes no sense. They beat the consoles and AMD to market and they made their sales. Their prebuilts will be all over store shelves this Xmas and will sell. If AMD comes out with competitive cards, so what? Will it impact 3080 sales? 3080s are already sold, and whatever stock hits shelves between now and October 28 will sell out as well. It's unlikely any Radeon 6000s will be on shelves before the second week of November, so 3080s will keep selling until at least then. Then the bots and scalpers will snatch those all up and sell them for inflated prices, just like what happened to the 3080s this week, so the 3080s will keep right on selling. There is no reason to expect that situation to change before Cyber Monday, at which point Nvidia has made their sales for Q4. What's the incentive to release a bunch of new SKUs and annoy their retailers and customers?
I'm not convinced either that Super cards will be a thing, mainly because I was under the impression NVIDIA wanted to make their lineup a bit less confusing. 3070, 3080, 3090, Super for each, Ti for each, Titan, blah...? But I linked the Super video anyway, because why not.