FAO 2080ti owners - a question about memory

JB007 Posts: 119
edited October 2020 in The Commons

With the new release of the Ampere generation, I'll be building a new system sometime before Christmas - with the big decision for me being 3080 or 3090.

Now, from what I do render-wise, performance isn't really the big issue for me .. we'll probably see up to a 20% increase when it comes to the 3090, and given I don't do huge animations, the difference between a 60 minute render and a 48 minute render doesn't interest me. That saving of 12 minutes doesn't seem worth an $800 extra price tag.

*** NOTE: Ideally, we'd have benchmarks to check if this 20% difference is actually the case, but .. well, once again, we're waiting on Daz for that, so .. we just have to guess whether to spend $700 or $1500 .. so, well done Daz

No, the biggest deciding factor for me is VRAM.

I currently have a 6gb 980ti and often hit the "drop to CPU" wall .. 

Again, for what I do .. a 24gb 3090 seems a bit OTT .. even the rumoured / probably on its way 20gb 3080 seems a bit overkill .. but, a 10gb 3080 doesn't "feel" like it's enough ...

So my question to the 2080ti owners .. just what punch do your scenes have when you fill your 11gb? How many characters / geometry / 4k textures can you fit in there before it gives up the fight?

Comments

  • Don't forget that DS doesn't yet support the Ampere cards.

  • thd777 Posts: 943
    edited October 2020

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    [Attached render: Evening at the Inn 2.jpg, 3000 x 1688]
  • hjake Posts: 988
    edited October 2020

    Don't forget that DS doesn't yet support the Ampere cards.

    Also remember that consumer Nvidia graphics adapters like the 3060/70/80/90 use Windows WDDM to manage VRAM, so you do not get the full VRAM capacity. To use all of the card's memory you would need an Nvidia card that supports TCC (the Quadro cards). I have a 2070 Super and I can use a bit more than 6GB of the total 8GB due to WDDM.

    Do a Bing search for Nvidia WDDM vs TCC for more information.

    These links also discuss the issue:

    https://www.daz3d.com/forums/discussion/428026/nvidia-ampere-2080-ti-etc-replacements-and-other-rumors/p1

    https://www.daz3d.com/forums/discussion/245991/daz3d-and-iray-in-tcc-mode
  • WDDM reserves substantially less than 1 GB, not almost 2 GB.

  • Havos Posts: 5,403

    If you use the very latest Windows 10 (2004), the "lost" VRAM is now about 900 MB. So for a 10GB card you should have 9.1 GB available. I have confirmed this behaviour on 3 separate cards.
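    If you want to sanity-check the numbers on your own card, you can query NVML (the same interface nvidia-smi uses) from a script. Here is a minimal sketch in Python, assuming the nvidia-ml-py package is installed; run it at idle and the "used" figure approximates what Windows/WDDM has reserved:

    ```python
    # Query total / used / free VRAM on the first GPU via NVML.
    # pip install nvidia-ml-py
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
    info = pynvml.nvmlDeviceGetMemoryInfo(handle)

    gib = 1024 ** 3
    print(f"total: {info.total / gib:.2f} GiB")
    print(f"used:  {info.used / gib:.2f} GiB")  # at idle, roughly the WDDM reservation
    print(f"free:  {info.free / gib:.2f} GiB")

    pynvml.nvmlShutdown()
    ```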

  • nicstt Posts: 11,715
    thd777 said:

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    Hmm, that's one of the best renders I've seen; it tells a really great story.

  • lilweep Posts: 2,561
    JB007 said:

    So my question to the 2080ti owners .. just what punch do your scenes have when you fill your 11gb? How many characters / geometry / 4k textures can you fit in there before it gives up the fight?
    I can run out of VRAM before adding any characters if I make the environment too complex. I have a tendency to add lots of poorly optimised assets from other marketplaces that are high poly and perhaps carry too many textures.

    With the 2080ti, I still usually do 2 renders - one with props + lighting, and then a second render with minimal props + lighting + characters on a beauty canvas so I can superimpose them.

    At minimum it's usually 2 renders. It's still the easier / lazier option for me to render twice rather than optimising.
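    For anyone wondering what that superimpose step can look like outside of Photoshop, here is a minimal sketch in Python with Pillow. It assumes the character pass was rendered with a transparent background (e.g. on a beauty canvas) and that both renders are the same size; the filenames are placeholders:

    ```python
    # Composite a characters-only pass over an environment pass.
    from PIL import Image

    background = Image.open("environment_pass.png").convert("RGBA")
    characters = Image.open("characters_pass.png").convert("RGBA")  # transparent except the figures

    final = Image.alpha_composite(background, characters)
    final.save("composite.png")
    ```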

  • thd777 Posts: 943
    nicstt said:

    Hmm, that's one of the best renders I've seen; it tells a really great story.

    @nicstt Thanks. That's my goal. I try. I'll probably succeed in 1 out of 50 or so... this one is definitely one of my all-time favourites. I think it's the third version of this concept. The older ones are in my DA gallery if you are interested.
    ciao

    TD

  • thd777 said:

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    I agree with @nicstt; technical proficiency aside, I really like this render. It's like the wizard with the familiar is totally laughing to himself about the ridiculousness of the tall tales the two adventurers are telling the tavern wench, because he did the same thing back in his day :)

  • hjake Posts: 988

    WDDM reserves substantially less than 1 GB, not almost 2 GB.

    My apologies, you and Havos are correct. When I got home I checked that computer and it is reserving 900 MB of VRAM.

  • nicstt Posts: 11,715
    hjake said:

    WDDM reserves substantially less than 1 GB, not almost 2 GB.

    My apologies, you and Havos are correct. When I got home I checked that computer and it is reserving 900 MB of VRAM.

    I think the amount has been reduced recently, going by a post I saw.

    thd777 said:

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    I agree with @nicstt; technical proficiency aside, I really like this render. It's like the wizard with the familiar is totally laughing to himself about the ridiculousness of the tall tales the two adventurers are telling the tavern wench, because he did the same thing back in his day :)

    That could be it, or he could be looking at her bum!

  • fastbike1 Posts: 4,078
    edited October 2020

    @JB007 "but, a 10gb 3080 doesn't "feel" like it's enough"

    I'm not sure why you think so. The 1 GB between a 2080TI and a 3080 isn't enough to matter in most cases. If you're just over the available VRAM on a 3080, it will be simple to substitute a smaller texture somewhere that doesn't matter.

    The 3080 is significantly faster than a 2080 TI and less expensive. You're going to have to buy a used 2080 TI, and who knows its condition.

    I went from a 980TI to a 3080. The 3080 is not only much faster, but uses less power and is about 15 C cooler when rendering. It also uses about 15% less VRAM than the 980TI, sometimes more.

  • Ghosty12 Posts: 2,068
    edited October 2020

    If you use TechPowerUp's GPU-Z, the sensors tab can give you an estimate of VRAM used; it doesn't show the reserved amount, so it's a rough guesstimate (a scripted alternative is sketched at the end of this post). I find it handy for when I render. I use the denoiser option quite a lot, and if it activates then I know I have enough VRAM available..

    I have a 1070Ti, and at times I have had the amount of VRAM used at between 7.3 and 7.4 GB and have had the denoiser activate.. And that is with a heavily optimized housing scene, with 4 Genesis 3 characters plus hair and clothing..

    Another thing: in my search for ways to reduce Windows' need for GPU VRAM, I came across a suggestion of using the motherboard's integrated GPU to drive the monitor, thus freeing up the video card's VRAM.. https://www.deviantart.com/dhruv3d/journal/Boosting-IRAY-renders-speed-in-DAZ-Studio-615736235
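    As an alternative to watching GPU-Z by hand, the same counters can be polled from a script. A small sketch in Python using NVML (the nvidia-ml-py package, an assumption on my part); leave it running in a console during a render to see how close you get to the ceiling:

    ```python
    # Print VRAM use once a second until interrupted with Ctrl+C.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    try:
        while True:
            info = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"{info.used / 1024**3:6.2f} GiB in use", end="\r")
            time.sleep(1.0)
    except KeyboardInterrupt:
        pass
    finally:
        pynvml.nvmlShutdown()
    ```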

  • nicstt Posts: 11,715
    edited October 2020
    fastbike1 said:

    @JB007 "but, a 10gb 3080 doesn't "feel" like it's enough"

    I'm not sure why you think so. The 1 GB between a 2080TI and a 3080 isn't enough to matter in most cases. If you're just over the available VRAM on a 3080, it will be simple to substitute a smaller texture somewhere that doesn't matter.

    The 3080 is significantly faster than a 2080 TI and less expensive. You're going to have to buy a used 2080 TI, and who knows its condition.

    I went from a 980TI to a 3080. The 3080 is not only much faster, but uses less power and is about 15 C cooler when rendering. It also uses about 15% less VRAM than the 980TI, sometimes more.

    Just remove all mouth textures. Reduce eye textures to 1k or even 512; they rarely need to be more, but I only bother if needed. I certainly like having the high-resolution originals, as upscaling is not effective or in any way useful.

    Removing mouth textures is my first option, except obviously where teeth show and are fairly close to the camera. From a distance the colour of mouth and teeth only needs to be approximate, even when the mouth is open.

    Another option with teeth and mouth is to have all models share the same textures. For very slight variations the shader colour can be altered. Even eye textures can be shared, with variances in diffuse colour, but this only works if the eye colour can't be determined from a distance beyond lighter or darker.

  • JB007 Posts: 119
    fastbike1 said:

    @JB007 "but, a 10gb 3080 doesn't "feel" like it's enough"

    I'm not sure why you think so. The 1 GB between a 2080TI and a 3080 isn't enough to matter in most cases. If you're just over the available VRAM on a 3080, it will be simple to substitute a smaller texture somewhere that doesn't matter.

    The 3080 is significantly faster than a 2080 TI and less expensive. You're going to have to buy a used 2080 TI, and who knows its condition.

    I went from a 980TI to a 3080. The 3080 is not only much faster, but uses less power and is about 15 C cooler when rendering. It also uses about 15% less VRAM than the 980TI, sometimes more.

    The point is, though, that I don't want to be subbing in a smaller texture somewhere if I don't need to .. this all adds time to the workflow. If it takes 30 mins of back and forth with Scene Optimizer, or having to reload a scene to clear out memory, then that's time that cuts into the gains in rendering. I want to be able to put together the kind of scene I sometimes have to do and just hit render.

    Now the post that @thd777 put up top is the info I'm looking for (and a cool render too!) .. that's probably the max size scene I've done in the past, and previously I've had to optimize, split it into parts, selectively render different bits of the scene and then stitch them together in Photoshop. Which of course makes everything take 5 times longer than it needs to, even before we take into account rendering speed on my 980ti vs a 2080ti, for instance. I need rid of all that, as when I'm on a project I'm having to knock out a dozen or so renders of that size over the course of a week, so time saving is everything.

    Now I accept there's always a balance to be found, and tweaks and twiddles we can make - but that's why I'm asking the question: what can fit on a 2080ti with 11gb (from which I can extrapolate what a 10gb 3080 could do) without those tweaks / twiddles and optimisations? Is it 4 Genesis 8 characters? 6? 8? .. and I know every scene is different .. but .. on average ..

    I'm trying to work out whether I'm going to have to be spending $700 or $1500 (.. with maybe a $999? option when the 20gb 3080 comes) .. it's a lot of money if you end up making the wrong choice.

  • With 11 gigs I could fit 3 G8s with dForce hair, but not 4.

    nicstt said:
    hjake said:

    WDDM reserves substantially less than 1 GB, not almost 2 GB.

    My apologies, you and Havos are correct. When I got home I checked that computer and it is reserving 900 MB of VRAM.

    I think the amount has been reduced recently, going by a post I saw.

    thd777 said:

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    I agree with @nicstt; technical proficiency aside, I really like this render. It's like the wizard with the familiar is totally laughing to himself about the ridiculousness of the tall tales the two adventurers are telling the tavern wench, because he did the same thing back in his day :)

    That could be it, or he could be looking at her bum!

    I think your explanation is more likely.

  • lilweep Posts: 2,561

    wait for 20+gb imo

  • With a bit of work, 11 gigs will get you a very long way.

    Converting all maps that aren't near the camera from 4k to 2k will help a lot (a sketch of this is at the end of this post).

    Avoid dForce hair wherever possible, because each strand is actually a mesh, and when subdivided it can take up literally gigabytes.

    Also consider that scenes with more than 3-4 characters rarely have them all interacting; they cluster, so you can render groups separately and composite in post. Shadow catchers may not even be necessary.
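    For the 4k-to-2k step, tools like Scene Optimizer will do it for you, but the idea is simple enough to sketch in Python with Pillow. The folder names and the jpg-only filter here are placeholders for whatever your texture set actually uses:

    ```python
    # Halve the resolution of every JPEG texture in a folder, writing the
    # copies to a separate folder so the originals stay untouched.
    from pathlib import Path
    from PIL import Image

    src = Path("textures_4k")
    dst = Path("textures_2k")
    dst.mkdir(exist_ok=True)

    for path in src.glob("*.jpg"):
        img = Image.open(path)
        half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
        half.save(dst / path.name, quality=92)
        print(f"{path.name}: {img.width}x{img.height} -> {half.width}x{half.height}")
    ```

    Pointing the distant materials at the smaller copies keeps the full-size maps around for close-ups.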

  • JB007 Posts: 119
    lilweep said:

    wait for 20+gb imo

    That's probably the ideal .. but will it be announced the week after Big Navi is released? Or released next March? Or this time next year? Who knows?
  • Ghosty12 Posts: 2,068
    JB007 said:
    lilweep said:

    wait for 20+gb imo

    That's probably the ideal .. but will it be announced the week after Big Navi is released? Or released next March? Or this time next year? Who knows?

    It'll probably depend on how much Big Navi scares Nvidia, though Nvidia may want to wait, as releasing a 3070/3080 with more VRAM so soon after the initial launch of the cards would tick off a lot of people..

  • JB007 Posts: 119
    Ghosty12 said:
    It'll probably depend on how much Big Navi scares Nvidia, though Nvidia may want to wait, as releasing a 3070/3080 with more VRAM so soon after the initial launch of the cards would tick off a lot of people..

    I tend to agree with you .. but the other side of me asks: can Nvidia afford to sit by for 6 months and let Big Navi beat the 3080 on performance (maybe .. hey, some are suggesting it might be up there with the 3090), beat it on memory, and probably beat it on price ..

    I don't know - maybe Nvidia will leave it to the board partners so they can hold their hands up and say "hey! Nothing to do with us" .. 

    But hence the dilemma .. 10gb or 24gb .. there's too big a gap in GBs and price for there NOT to be something in between .. the question is how long I / we can afford to sit around twiddling our thumbs waiting to find out what.

  • nicstt Posts: 11,715

    I still think the 3090 FE card is a decent (well, semi-decent) price: lots of extra RAM, some extra performance, and the option to add another with NVLink to increase the RAM pool. The one thing missing is that it isn't a compute card, despite what Nvidia hinted at. Windows will steal some RAM.

  • Rauko Posts: 38
    edited October 2020
    nicstt said:

    I still think the 3090 FE card is a decent (well, semi-decent) price: lots of extra RAM, some extra performance, and the option to add another with NVLink to increase the RAM pool. The one thing missing is that it isn't a compute card, despite what Nvidia hinted at. Windows will steal some RAM.

    It would seem it's the better option.

  • outrider42 Posts: 3,679
    edited October 2020

    The concept of optimizing a scene does not have to be too time-consuming. If you take a few minutes, you can create material presets for your most-used items. You can have a full preset for the up-close renders, and one that reduces sizes for when they are farther away. When it comes to things like the mouth, you can create presets that ONLY remove the mouth textures and do not affect anything else at all. Just remove the textures, save as a preset, and make sure that only the mouth surfaces are checked. Done. Now you have a preset to instantly remove mouth textures. If you want them back, reload the original preset. Bang, it can literally be like an "Easy" button.

    Now, addressing the 2080ti: again, that is just 1gb more space. Meanwhile, the 3080 will be roughly TWICE as fast as the 2080ti at rendering Iray. So that 60 minute 2080ti render becomes about 30 with a 3080; it might even be faster than that. Surely that is more than enough time to make the couple of small changes you might need to get under 10gb. I don't know about you, but I personally would take that rendering speed every single time if we are talking a single gb difference.

    How do I know this? Because the Iray Dev Team themselves have released their own benchmarks in Iray 2020.1. I've posted this a few times already, but here it is.

    In this chart, they compare Quadros to the 3080. The Quadros listed are top cards. The RTX 6000 is slightly faster than a 2080ti. The P6000 is slightly faster than a 1080ti. The 3080 easily wipes the floor with these cards in every scene. The average performance increase is 2.1 times over the RTX 6000.

    Now for people questioning if this is the Iray that Daz Studio will get... Iray is Iray. Some specific features may not be used, like how Daz does not have motion blur, but the core component of Iray is exactly the same. Thus the performance gains here should be on par with what people see. It may depend on scene complexity; generally, the more geometrically complex a scene is, the more of a performance gain you see.

    As for the mythical 20gb 3080, there is simply no guarantee of when it releases, or even IF it releases. AMD would need to shock the world to force Nvidia to release it early. Now this is my opinion, but I believe it is the correct one, LOL. Otherwise, Nvidia will not release 20gb versions until about mid 2021. And besides, even if they do release it, do not expect it to be the same price, or even close. I think it would run around $1000, maybe more.

    The reason for this is that Nvidia never announced them. If Nvidia had made the announcement back in September, then sure, they could release sooner. But they didn't. Without any announcement for them, it would be almost deceptive to release double-capacity cards so shortly after launch. That has never happened. I expect AMD to compete, but just competing is simply not enough to sway that many people. You can try to make arguments, but Nvidia is the champion. Not only is Nvidia the champ, but they have been completely uncontested in the high end for a very long time. So even if AMD squeaks out a small win at a few games, that will not change things overnight. Many gamers are entrenched with Nvidia, and some players have G-Sync-only monitors, a feature AMD GPUs will not support. So anybody who has a G-Sync screen would have to buy BOTH a new monitor and an AMD GPU in order to fully ditch Nvidia. That is certainly not going to happen if the 3080 is still faster at most games.

    We saw this with Ryzen. The first-gen Ryzen was a revelation, but it did not outsell Intel. It took nearly 2 years for Ryzen to really catch up and start selling on par with Intel. Nvidia is in a much better position than Intel, too. I really don't get why some people think AMD is going to come out and destroy Nvidia... they aren't, even if they are faster. AMD has already affected Nvidia and Ampere. You got a $700 3080 based on the top GA102 chip that is clocked as high as it can be. They tuned it up so much that it uses 320 Watts. None of this would have happened without the threat of AMD looming. I would wager that without any AMD threat the 3080 would have been on GA104, which is where the 3070 currently is, and it would still cost over $800. If you look, there were rumors about the 3080 originally being planned for GA104. The x80 has been based on Gx104 for several generations, so this change was a big one.

  • hjake Posts: 988
    edited October 2020
    nicstt said:
    hjake said:

    WDDM reserves substantially less than 1 GB, not almost 2 GB.

    My apologies, you and Havos are correct. When I got home I checked that computer and it is reserving 900 MB of VRAM.

    I think the amount has been reduced recently, going by a post I saw.

    thd777 said:

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    I agree with @nicstt; technical proficiency aside, I really like this render. It's like the wizard with the familiar is totally laughing to himself about the ridiculousness of the tall tales the two adventurers are telling the tavern wench, because he did the same thing back in his day :)

    That could be it, or he could be looking at her bum!

    It seems you may be correct. Attached is a screenshot of my system just after start-up. Just 300 MB. That is a BIG reduction from when I checked on this issue way back in Jan 2020. Remember the good old days? :-)

    [Attached screenshot: 2020-10-04_162444.png]
  • fastbike1 Posts: 4,078

    @JB007 "Can Nvidia afford to sit by for 6 months and let Big Navi beat the 3080 on performance"

    If you want to use Iray, Big Navi isn't going to make a difference. Frankly, AMD has a habit of having the next Nvidia or Intel killer just around the corner, yet when those products are released, they don't seem to impact the other companies much at all.

  • nicstt Posts: 11,715
    edited October 2020

    Just like Intel thought they didn't have to worry.

    I use Blender, and whilst I would prefer a card that will allow me to use Iray, I'll give up that opportunity for a better (more RAM / faster / cheaper) card.

    ... Does that mean I think AMD will beat the 3090?

    No idea, but from all the leaks I've seen and the discussions around them, I'm thinking it will be about on par: around the 3080, with more RAM for less - and that is very tempting.

  • PerttiA Posts: 10,024
    nicstt said:

    Just like Intel thought they didn't have to worry.

    I use Blender, and whilst I would prefer a card that will allow me to use Iray, I'll give up that opportunity for a better (more RAM / faster / cheaper) card.

    ... Does that mean I think AMD will beat the 3090?

    No idea, but from all the leaks I've seen and the discussions around them, I'm thinking it will be about on par: around the 3080, with more RAM for less - and that is very tempting.

    Looking at the situation through gamer goggles gives a somewhat one-sided view.

  • Havos Posts: 5,403
    hjake said:
    nicstt said:
    hjake said:

    WDDM reserves substantially less than 1 GB, not almost 2 GB.

    My apologies, you and Havos are correct. When I got home I checked that computer and it is reserving 900 MB of VRAM.

    I think the amount has been reduced recently, going by a post I saw.

    thd777 said:

    I use an ASUS RTX 2080ti as my primary render card. Here is an example of a fairly big scene. It has 8 G3 and G8 characters with clothes, the DA cat and dog with fur, the wyvern, and the ROG Inn. It used 10.5 GB of VRAM on the card when rendered at 3000x1688. The scene is of course heavily optimized, i.e. texture sizes reduced, etc.

    Ciao

    TD

    I agree with @nicstt; technical proficiency aside, I really like this render. It's like the wizard with the familiar is totally laughing to himself about the ridiculousness of the tall tales the two adventurers are telling the tavern wench, because he did the same thing back in his day :)

    That could be it, or he could be looking at her bum!

    It seems you may be correct. Attached is a screenshot of my system just after start-up. Just 300 MB. That is a BIG reduction from when I checked on this issue way back in Jan 2020. Remember the good old days? :-)

    However, I don't think that graphic really tells you how much VRAM is available to DS, because Windows 10 will not let any one app grab all that is available. To see that, you need to run Daz Studio and then do a simple test render. Open the DS log and search for "GiB". For an 8GB card, this will show a line saying you have a card with 8.0 GiB of VRAM, and should state that a bit over 7 is available for use in DS.
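    If you'd rather not scroll through the log by hand, a few lines of Python will pull out every line that mentions GiB. This is a sketch assuming the usual Windows log location; adjust the path if your install writes its log somewhere else:

    ```python
    # Print every Daz Studio log line that mentions "GiB" (Iray's VRAM reports).
    # The path below is the usual Windows location; adjust it for your install.
    from pathlib import Path

    log = Path.home() / "AppData" / "Roaming" / "DAZ 3D" / "Studio4" / "log.txt"

    for line in log.read_text(errors="ignore").splitlines():
        if "gib" in line.lower():
            print(line.strip())
    ```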

  • nicstt Posts: 11,715
    edited October 2020
    PerttiA said:
    nicstt said:

    Just like Intel thought they didn't have to worry.

    I use Blender, and whilst I would prefer a card that will allow me to use Iray, I'll give up that opportunity for a better (more RAM / faster / cheaper) card.

    ... Does that mean I think AMD will beat the 3090?

    No idea, but from all the leaks I've seen and the discussions around them, I'm thinking it will be about on par: around the 3080, with more RAM for less - and that is very tempting.

    Looking at the situation through gamer goggles gives a somewhat one-sided view.

    Yeah, that is true; but AMD's compute was usually better, even on cards that were beaten by Nvidia's gaming cards. In other words, their compute was more effective; will it be the same this time? NFC.

    ... Which is why I'm happy to wait. My advantage is keeping my cash, whereas Nvidia (or whoever) always wants us to spend, spend, spend.

    If I can get an FE 3090 for 1400, I might get one, but either way I'm not bothered.

    Nvidia has really screwed up the release, and the highly suspicious, cynical conspiracy theorist in me wonders why.
