Daz Prayers Answered? 4060 Ti w 16GB VRAM!

24 Comments

  • The reviews of the 8GB 4060 Ti are starting to come in... launch is tomorrow. TLDR: basically, 3070 performance.

    The Nvidia GeForce RTX 4060 Ti brings true mainstream pricing to the Ada Lovelace architecture and RTX 40-series GPUs, starting at $399 for the Founders Edition and reference-clocked models. Unfortunately, it also brings a lot of potential compromises into play, chief among them being the 128-bit memory interface and 8GB of VRAM. Nvidia has a potential solution for the capacity problem with a 16GB model planned for release in July, but it won't address any concerns with the memory interface.

    Is the RTX 4060 Ti one of the best graphics cards? That largely depends on how many of the games you play support DLSS 3 and whether you're willing to trade latency for AI-interpolated FPS. Looking at native performance in our GPU benchmarks hierarchy (which will be updated later today), the RTX 4060 Ti comes in just ahead of the RTX 3070 at 1080p, but falls behind the RTX 3060 Ti at 1440p and 4K.

    So the good news is that the RTX 4060 Ti is generally faster than the previous generation RTX 3060 Ti at the same price while using less power. It also supports new Ada features like DLSS 3 Frame Generation, SER, DMM, and OMM. The bad news is that it barely surpasses its predecessor overall, and design decisions made years ago are certainly at play.

    Let's dive into the spec sheet to see what the Nvidia RTX 4060 Ti offers.

     

  • PerttiA Posts: 10,024

    nonesuch00 said:

    Padone said:

    Windows 10 only allocates some vram on the card connected to the monitor. It doesn't allocate anything if the card is not connected. In my case I connect the monitor to the mobo so it uses the ryzen apu. This way the nvidia card is reserved for rendering and the allocated vram is zero until I render.

    Of course if the cpu doesn't have graphics then windows will take the nvidia card you have no choice in this case.

    That is what I do but I still run out of VRAM (12GB) for some renders.

    It is too bad that nobody using DS 4.15 on W10 with monitors connected to the integrated GPU has provided detailed information on their VRAM usage (from GPU-Z) together with the VRAM usage from the DS log. That would give us the truth about how much VRAM is used at different stages of the process (with no application running, DS started, scene loaded, and while rendering).

    Nvidia removed the VRAM usage for geometry and textures from the DS log in versions after DS 4.15, so getting the data from GPU-Z with newer versions of DS is not enough as we don't know how much VRAM the geometry and textures are using.

    If we did have such information, we would have proof that driving the monitors with the integrated GPU does free up VRAM on the Nvidia GPU.
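For anyone who wants to capture those per-stage numbers without GPU-Z, here is a minimal sketch that polls the NVIDIA driver directly. It assumes the pynvml package (the NVML Python bindings) is installed, and it only reports total/used/free VRAM per GPU, not the geometry/texture split that only the DS 4.15 log provides:

```python
# Sketch: print VRAM usage for every NVIDIA GPU at a labelled stage.
# Requires the pynvml bindings (pip install nvidia-ml-py). Run it at each
# stage (idle desktop, DS started, scene loaded, rendering) and compare.
import sys
import pynvml

def report(stage: str) -> None:
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):   # older pynvml versions return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"[{stage}] GPU {i} ({name}): "
                  f"{mem.used / 2**20:.0f} / {mem.total / 2**20:.0f} MiB used")
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else "now")
```

Running it once at each stage and comparing the "used" column gives the totals PerttiA is asking about, but the geometry/texture breakdown still has to come from the DS log.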

  • outrider42 Posts: 3,679

    Torquinox said:

    outrider42 said:

    The standard expectation is for the new generation to improve these areas without a big price increase. While MSRP was not real for a long time, the 3060 was $320 MSRP. The 4060ti is $180 more. So it SHOULD be better at everything, lol. I managed to get my 3060 under $400 during the peak of COVID.

    A big part of the problem with the 4000 series is that price to performance has been terrible compared to previous generations. The 4090 is the only card to really improve this mark. The 4080 is faster and has more VRAM than a 3080, but the difference in price doesn't add up. The 4080 offers far worse price to performance than the 3080. The 4070ti did the same thing vs the 3070ti. The 4070 finally got a little closer, but its $100 increase over the 3070 still hurt its overall value.

    The 4060 is the very first 4000 series card to launch at a price lower than its predecessor. But even here we are only talking $20, and the 4060 has less VRAM. The 4060ti 8gb matches the launch price of the 3060ti, while the 4060ti 16gb has a $100 premium. This is also paired with the fact that these cards are expected to deliver only modest performance increases over their predecessors. Making it even worse is that any 8gb GPU released in 2023 is going to get murdered by reviewers unless it is really cheap.

    The 4060ti 16gb at $500 is slightly better... but not terribly exciting. The GPU market has crashed. It has crashed so badly that Nvidia is stopping production of 4070s because they cannot sell them. They do not have a good excuse to charge what they are today. They can reduce the prices and still make a good profit. The world is going into recession (some would argue it's already there), and this is a time period when historically goods shave a little off their margins to keep moving units. But unlike previous eras, some markets refuse to drop. Just because prices got wild during COVID doesn't mean they should stay that way. The shortages are long over for many goods. This has been coined "greedflation". I don't think any of these GPUs will sell that great. They are also competing with outgoing 3000 stock.

    Strongly reasoned response. I understand. Maybe the 4000 series is one to skip. Even so, this is the world we live in. Among 4060s, the 16GB card is the only one I would even consider. What would anyone using DS do with those others? If unit sales are going to be lower, prices may even end up a little higher to compensate. Greedflation is likely part of the price; no reason to doubt it! But other factors could also play a part. Either way, it seems the whole world is in a different place now than it was before Covid. I doubt it's ever really going back. The rules followed in the "before times" may no longer hold true. We shall see.

    There are PC parts dropping to pre COVID prices. All forms of memory are dropping fast, and memory was one of the factors often cited as causing price increases. But today the extra 8gb is maybe $25, and probably not even that with Nvidia's ability to buy at mass volume and negotiate better deals. If a big recession hits, any companies still trying to push pre COVID prices are going to be in for a very bad time. It is better to sell something than nothing.

    All GPUs have stopped selling. You can now find all the new GPUs being sold under MSRP in places. Even the 4090 can be found under its MSRP, indicating its demand has been met. After all, if you put something on sale, the demand is met. The still very new 4070 can also be found under its $600 MSRP in places, and as I said somewhere, reports say Nvidia has actually stopped producing 4070s because of how poorly they are selling. This should be a shock to the system, the 70 class is supposed to be a good seller. 

    So I doubt this will change with the 60 class cards. They might fare better, but not by a lot. Gaming is a big driver of these sales, and gamers are just fed up right now. I've been checking out reviews of the 4060ti, and wow, they are even more brutal than I expected them to be. The biggest hardware channels blasted the 4060ti 8gb from start to finish. Gamers Nexus called the 4060ti a "waste of sand", a line they have used only a couple of times for some really bad products. The negativity has been reflected online; I cannot recall a time when the reaction has been so widely negative towards Nvidia. Even the 2080ti didn't get this much flak. Gamers are getting tired of it. Some are turning to consoles. The PS5 can be bought for $400, the same price as the 4060ti 8gb. That is a problem: new games are finally being built for "next gen" hardware, and it turns out that 8gb GPUs are having a bad time with them. I don't think the 16gb model will fare much better because of its $500 price. That price puts it too close to the 4070, and keep in mind the performance gap is massive; the 4070 can be over 50% faster in some games. Even more insulting is that there are games where the old 3060ti actually beats the 4060ti, and overall the 4060ti is way too close to the 3060ti in performance. That is absolutely ridiculous; the 3060ti should never be able to beat the new 4060ti.

    I am not saying it will be a bad Iray card, but you might want to just wait a few weeks after launch. The 4070 started going on sale just a week after launch. Since the 16gb model almost certainly has a better margin, it might even go on sale faster than the 8gb model. I would not be surprised to see the 16gb model drop to $450 within the first month of its launch. So my advice to people who are not in a hurry is to just wait a bit longer, not even that long, before deciding to grab one. They should be easy to get. Skipping Lovelace isn't a bad idea, either. Waiting usually doesn't hurt, as long as there isn't a sudden crypto boom. But it will probably be 2 years before we get a 5060 type card, since they always wait a while before launching those. So I cannot blame anyone for picking up the 4060ti 16gb. I kind of imagine that most of the 4060ti 16gb buyers will be people like us rather than gamers. It will be interesting to see what its sales figures are.

    Also, there will be no Nvidia Founders Edition of the 16gb model. I find that odd, as the 8gb model has a FE. So that leaves the 16gb model to the 3rd parties.

  • Padone Posts: 3,700
    edited May 2023

    @PerttiA If you don't connect the gpu to the monitor then the used vram is zero until you render. Meaning the card is reserved for iray. The viewport doesn't allocate anything unless you use the iray preview. Then how much iray allocates depends on many factors including the render settings, the card model, the iray version and the nvidia driver version.

    @nonesuch00 Yes that is exactly my point. It doesn't matter how much vram you have, daz studio is practically unusable if you don't get the scene optimizer to reduce textures. Having more vram helps of course, but only if you first use the scene optimizer. Then personally I use blender with the simplify options, which are basically the scene optimizer for blender.

    https://www.daz3d.com/scene-optimizer
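This is not the Scene Optimizer itself, but conceptually the texture side of what it (and Blender's simplify option) does can be sketched in a few lines. A rough illustration assuming Pillow is installed; the folder names and the scale factor are made up:

```python
# Rough illustration only, not the actual Scene Optimizer: halve the
# resolution of every texture in a folder using Pillow.
from pathlib import Path
from PIL import Image

SRC = Path("textures_original")   # hypothetical input folder
DST = Path("textures_reduced")    # hypothetical output folder
SCALE = 2                         # 2 = half resolution, 4 = quarter, ...

DST.mkdir(exist_ok=True)
for path in sorted(SRC.glob("*")):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".tif", ".tiff"}:
        continue
    with Image.open(path) as img:
        reduced = img.resize((max(1, img.width // SCALE),
                              max(1, img.height // SCALE)), Image.LANCZOS)
        reduced.save(DST / path.name)
        print(f"{path.name}: {img.size} -> {reduced.size}")
```

Halving the resolution cuts the uncompressed memory footprint of a texture to roughly a quarter, which is why texture reduction frees up so much more VRAM than most other optimizations.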

  • PerttiA Posts: 10,024

    Padone said:

    @PerttiA If you don't connect the gpu to the monitor then the used vram is zero until you render. Meaning the card is reserved for iray. The viewport doesn't allocate anything unless you use the iray preview. Then how much iray allocates depends on many factors including the render settings, the card model, the iray version and the nvidia driver version.

     

    So, show us a DS 4.15 log that gives amount of VRAM used for geometry and textures while rendering, together with the total amount of VRAM used at the same time from GPU-Z. 

  • doubledeviant Posts: 1,140
    I'd guess that most here are primarily interested in desktops (for greater power, upgradability, etc), but for reasons beyond Daz, I prefer to have a laptop, and now seems to be a good time to buy: I'm seeing discounts of $500-$800 (perhaps due to all the buzz about the new cards), and some very nice models with a 16GB 3080 Ti are now about half the price of newer machines with a 16GB 4090.

    The newer card is quite nice, but at these prices, I'm thinking that the older card is the better upgrade from my current 2070 SUPER, despite not being top of the line for Daz.
  • PerttiA Posts: 10,024

    doubledeviant said:

    I'd guess that most here are primarily interested in desktops (for greater power, upgradability, etc), but for reasons beyond Daz, I prefer to have a laptop, and now seems to be a good time to buy: I'm seeing discounts of $500-$800 (perhaps due to all the buzz about the new cards), and some very nice models with a 16GB 3080 Ti are now about half the price of newer machines with a 16GB 4090.

    The newer card is quite nice, but at these prices, I'm thinking that the older card is the better upgrade from my current 2070 SUPER, despite not being top of the line for Daz.

    The only problem with RTX 2070 Super is not enough VRAM. I had one before I upgraded to RTX 3060 12GB, fully aware that rendering speed would not improve much if any, but the 8GB VRAM was starting to limit what I could do.

    If the laptop does have a good nVidia RTX GPU, the remaining problem will be overheating, as with anything 'heavy' one does with laptops, and the lack of screen space for my taste (got DS spread over 3 monitors myself)

  • PerttiA Posts: 10,024

    Checked the local shop. RTX 4060 Ti, sales will start in one hour and the cheapest one costs 460 Eur (including 24% VAT) (Asus and MSI)

    For comparison, the Asus GeForce DUAL-RTX3060-O12G-V2 is listed at 390Eur (including 24% VAT)

  • doubledeviant Posts: 1,140
    PerttiA said:

    The only problem with RTX 2070 Super is not enough VRAM. I had one before I upgraded to RTX 3060 12GB, fully aware that rendering speed would not improve much if any, but the 8GB VRAM was starting to limit what I could do.

    If the laptop does have a good nVidia RTX GPU, the remaining problem will be overheating, as with anything 'heavy' one does with laptops, and the lack of screen space for my taste (got DS spread over 3 monitors myself)

    Yeah, I'm presently interested in "more things per render" rather than rendering speed. Double the amount of VRAM is the primary driver of wanting to upgrade. Not just for rendering, but for gaming, too.

    This MSI Stealth ($750 off) and this MSI Raider ($900 off) were my top choices, both about half the cost (after taxes) of the newer models with 4090s. Granted, some of that difference is additional RAM, better processor, etc, but still: Too big a gap in price for me to find opting for the newer card attractive.
  • PerttiA Posts: 10,024

    doubledeviant said:

    PerttiA said:

    The only problem with RTX 2070 Super is not enough VRAM. I had one before I upgraded to RTX 3060 12GB, fully aware that rendering speed would not improve much if any, but the 8GB VRAM was starting to limit what I could do.

    If the laptop does have a good nVidia RTX GPU, the remaining problem will be overheating, as with anything 'heavy' one does with laptops, and the lack of screen space for my taste (got DS spread over 3 monitors myself)

    Yeah, I'm presently interested in "more things per render" rather than rendering speed. Double the amount of VRAM is the primary driver of wanting to upgrade. Not just for rendering, but for gaming, too.


    This MSI Stealth ($750 off) and this MSI Raider ($900 off) were my top choices, both about half the cost (after taxes) of the newer models with 4090s. Granted, some of that difference is additional RAM, better processor, etc, but still: Too big a gap in price for me to find opting for the newer card attractive.

    For rendering Iray in DS, a 12GB GPU already doubles the available VRAM and a 16GB GPU triples it, as about 4 GB of VRAM is taken by the baseload of W10, DS, the scene and the necessary 'working space', leaving an 8GB GPU with only about 4 GB for geometry and textures.
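Taking PerttiA's roughly 4 GB baseload figure at face value (it is an estimate from this thread, not a measured constant), the arithmetic behind "doubles" and "triples" works out like this:

```python
# Back-of-envelope check of the "doubles / triples" claim above.
# BASELOAD_GB is PerttiA's estimate for W10 + DS + scene + working space;
# it varies per system, so treat the outputs as rough ratios, not exact figures.
BASELOAD_GB = 4.0

for total_gb in (8, 12, 16):
    usable = total_gb - BASELOAD_GB
    print(f"{total_gb:2d} GB card -> ~{usable:.0f} GB for geometry/textures "
          f"({usable / (8 - BASELOAD_GB):.1f}x an 8 GB card)")
# 8 GB card  -> ~4 GB  (1.0x)
# 12 GB card -> ~8 GB  (2.0x)
# 16 GB card -> ~12 GB (3.0x)
```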

  • daveso Posts: 7,014

    3080ti is $1800 above my price point. We will see where these 4060 systems price out.

  • Padone Posts: 3,700
    edited May 2023

    @PerttiA You can't trust the iray log, that's probably why nvidia removed the vram usage from the log. Just use the windows task manager to get the vram usage. As for your request to show the vram usage while rendering, as already explained, that depends on many factors and is not relevant. What's relevant is that the vram usage is zero until you render.

  • PerttiA Posts: 10,024

    Padone said:

    @PerttiA You can't trust the iray log, that's probably why nvidia removed the vram usage from the log. Just use the windows task manager to get the vram usage. As per your request of showing the vram usage while rendering, as already explained that depends on many factors and anyway is not relevant. What's relevant is that the vram is zero until you render.

    When you only look at the total VRAM usage, you don't know what that VRAM is used for. Does Windows jump in and take its part? Certainly the working space is taken from the VRAM of the rendering GPU. If one doesn't know how much VRAM the textures and geometry are taking, one can't know how much is available for them.

    There are no indications that the information in the DS log is not accurate. For textures it shows VRAM usage of about 50% of what those textures use in RAM, which is about how much the default Iray compression settings seem to reduce the texture memory footprint.
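As a worked example of that roughly 50% figure: the 4096x4096 size, the 8-bit RGBA format and the flat 50% factor below are illustrative assumptions, not a description of how Iray's compression actually behaves.

```python
# Illustrative only: uncompressed in-RAM footprint of one texture versus the
# ~50% figure PerttiA reports from the DS 4.15 log. Real Iray compression
# depends on the texture and on the compression thresholds in Render Settings.
width, height, channels, bytes_per_channel = 4096, 4096, 4, 1

uncompressed_mib = width * height * channels * bytes_per_channel / 2**20
estimated_vram_mib = uncompressed_mib * 0.5   # rough 50% factor from the log

print(f"uncompressed: {uncompressed_mib:.0f} MiB")    # 64 MiB
print(f"est. in VRAM: {estimated_vram_mib:.0f} MiB")  # 32 MiB
```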

  • savagestug Posts: 172

    PerttiA said:

    doubledeviant said:

    I'd guess that most here are primarily interested in desktops (for greater power, upgradability, etc), but for reasons beyond Daz, I prefer to have a laptop, and now seems to be a good time to buy: I'm seeing discounts of $500-$800 (perhaps due to all the buzz about the new cards), and some very nice models with a 16GB 3080 Ti are now about half the price of newer machines with a 16GB 4090.

    The newer card is quite nice, but at these prices, I'm thinking that the older card is the better upgrade from my current 2070 SUPER, despite not being top of the line for Daz.

    The only problem with RTX 2070 Super is not enough VRAM. I had one before I upgraded to RTX 3060 12GB, fully aware that rendering speed would not improve much if any, but the 8GB VRAM was starting to limit what I could do.

    If the laptop does have a good nVidia RTX GPU, the remaining problem will be overheating, as with anything 'heavy' one does with laptops, and the lack of screen space for my taste (got DS spread over 3 monitors myself)

    I went from a 2070 S to a 3060 and saw an 8 - 12% decrease in render times, across the 10 scenes I benchmarked before and after the upgrade.

  • outrider42 Posts: 3,679

    That seems a bit odd, none of the 2070 Supers shown in the benchmark thread were faster than the 3060s. The 3060 is even flirting with 2080ti times. I am not saying I doubt your results, but I am curious what is different in the scenes you benchmarked, if that was the same version of Daz, and if you tried the benchmark thread scene with the 2 cards.

  • jd641 Posts: 458
    edited May 2023

    outrider42 said:

    That seems a bit odd, none of the 2070 Supers shown in the benchmark thread were faster than the 3060s. The 3060 is even flirting with 2080ti times. I am not saying I doubt your results, but I am curious what is different in the scenes you benchmarked, if that was the same version of Daz, and if you tried the benchmark thread scene with the 2 cards.

    That's what he's saying though, an increase would mean his renders took longer but a decrease in render time means they were faster with the 3060.

  • Kitsumo Posts: 1,216
    edited May 2023

    PerttiA said:

    Padone said:

    @PerttiA You can't trust the iray log, that's probably why nvidia removed the vram usage from the log. Just use the windows task manager to get the vram usage. As per your request of showing the vram usage while rendering, as already explained that depends on many factors and anyway is not relevant. What's relevant is that the vram is zero until you render.

    When you only look at the total VRAM usage, you don't know what that VRAM is used for, does windows jump in and take it's part, certainly the working space is taken from the VRAM of the rendering GPU. If one doesn't know how much VRAM the textures and geometry are taking, one can't know how much is available for them.

    There are no indications that the information in the DS log is not accurate. For textures they are showing VRAM usage of about 50% of how much those textures are using RAM, and that is how much the default Iray compression settings seem to reduce the memory footprint for textures.

    What, you mean your system doesn't show you that? :)

    [Image: Daz Studio render screenshot]

    I kid, but it would be nice to know the texture / geometry figures. I didn't realize they took that out.

  • PerttiA Posts: 10,024
    edited May 2023

    Kitsumo said:

    PerttiA said:

    When you only look at the total VRAM usage, you don't know what that VRAM is used for, does windows jump in and take it's part, certainly the working space is taken from the VRAM of the rendering GPU. If one doesn't know how much VRAM the textures and geometry are taking, one can't know how much is available for them.

    There are no indications that the information in the DS log is not accurate. For textures they are showing VRAM usage of about 50% of how much those textures are using RAM, and that is how much the default Iray compression settings seem to reduce the memory footprint for textures.

    What, you mean your system doesn't show you that? :)

    I kid, but it would be nice to know the texture / geometry figures. I didn't realize they took that out.

    Even that didn't tell how much was used for what purpose, i.e. how much is available for geometry and textures, which is the important information.

    I did this when I was still using the RTX 2070 Super: by taking notes at each step of the RAM and VRAM usage together with the information from the Daz log, I was able to figure out how much VRAM was actually available for the geometry and textures.

    Case a) just one lightweight G8 figure with lightweight clothing and hair
    Case b) four similar G8 characters with architecture
    Cases c and d) increasing SubD on the characters to see at which point the rendering would drop to CPU

    "RAM/GB" and "VRAM/MB" taken from GPU-Z, "DS Log/GiB" taken from DS Log, no other programs were running but DS and GPU-Z
    The "DS Log/GiB" is the sum of Geometry usage, Texture usage and Working Space - After Geometry and Textures, there should still be at least a Gigabyte of VRAM available for the Working space => In my case, Geometry + Textures should not exceed 4.7GiB (800MB's less on W10)

    The test was made using an RTX 2070 Super (8GB), an i7-5820K and 64 GB of RAM on W7 Ultimate with DS 4.15.

    The fact that a device which is doing nothing has no VRAM reserved tells nothing about how much VRAM is used for what when that device is actually used for something (Iray rendering). To see how much of the total usage comes from something other than geometry and textures, we would need to know how much the geometry and textures themselves are taking.
    Sure, one can estimate the total usage of the textures and maps based on how much they use in RAM, but for that one would need to check the dimensions and color depth of each and every image file first (a rough version of that tally is sketched below), and even that doesn't show the amount of memory the geometry is taking.

    I would do the test myself, but 1) I have never accepted motherboards with integrated GPUs and 2) I'm still running W7.
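For anyone who does want to attempt the tally PerttiA describes, a rough sketch follows. The folder path, the mode-to-bytes mapping and the flat 50% compression factor are assumptions for illustration, and it says nothing about geometry:

```python
# Rough tally of uncompressed texture memory for a set of image files.
# The 50% factor is only the approximate reduction reported by the DS 4.15
# log for default Iray compression settings; geometry is not covered at all.
from pathlib import Path
from PIL import Image

TEXTURE_DIR = Path("my_scene_textures")          # hypothetical folder
BYTES_PER_PIXEL = {"L": 1, "P": 1, "RGB": 3, "RGBA": 4, "I;16": 2}

total_bytes = 0
for path in TEXTURE_DIR.rglob("*"):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".tif", ".tiff"}:
        continue
    with Image.open(path) as img:
        bpp = BYTES_PER_PIXEL.get(img.mode, 4)   # fall back to 4 bytes/pixel
        total_bytes += img.width * img.height * bpp

print(f"uncompressed: {total_bytes / 2**30:.2f} GiB")
print(f"~50% compressed estimate: {total_bytes * 0.5 / 2**30:.2f} GiB")
```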

    [Attachment: RenderTst.png, a table of the RAM, VRAM and DS log readings for test cases a-d]
  • Kitsumo Posts: 1,216

    PerttiA, I'm not really following. So is this all about finding out whether Windows 10 is using extra VRAM from the Nvidia gpus while integrated graphics are used for display?

  • PerttiA Posts: 10,024

    Kitsumo said:

    PerttiA, I'm not really following. So is this all about finding out whether Windows 10 is using extra VRAM from the Nvidia gpus while integrated graphics are used for display?

    No, it's about finding out how much VRAM is available for rendering (specifically geometry and textures) on an nVidia GPU that is not connected to monitors, on W10 because that is the OS most people are using.

    Even if I put the old RTX 2070 Super back to drive the monitors and did the rendering on my RTX 3060 12GB, that wouldn't be the same, as W7 probably would not behave the same as W10 in that case.

  • Kitsumo Posts: 1,216

    I'm not a Daz developer, but those sound like 2 totally separate things. How much VRAM Win7, Win10 or any other OS makes available to DS is one thing. How much DS allocates specifically for textures and geometry is another. I don't see why the OS would have any control over how a program operates internally.

  • PerttiA Posts: 10,024

    Kitsumo said:

    I'm not a Daz developer, but those sound like 2 totally separate things. How much VRAM Win7, Win10 or any other OS makes available to DS is one thing. How much DS allocates specifically for textures and geometry is another. I don't see why the OS would have any control over how a program operates internally.

    Don't get stuck on the OS. The problem is that even when we can monitor the total VRAM usage on the nVidia GPU that's not driving the monitors, we don't know how much of that usage is something other than geometry and textures => we don't know how much of the total VRAM is available for geometry and textures in Iray rendering.

    The test I made on the RTX 2070 Super that was driving 3 monitors, using DS 4.15 (which does tell how much VRAM is used for geometry and textures), gave the result that geometry and textures could use up to 4.7 GB out of the total 8 GB before the rendering would drop to CPU. Later discussions in here revealed that W10 takes about 1 GB of VRAM where W7 only takes about 200 MB (that's about the difference between the OS's).
    The above showed that when rendering with the same nVidia GPU that's driving the monitor(s), the VRAM available for geometry and textures on W10 is about 4 GB less than the total installed VRAM on the GPU.

    What we would like to know is: if the monitors are driven by some other GPU and the nVidia GPU is used only for rendering, how much of the installed VRAM is available for geometry and textures? The test should be done on W10 as that would serve the larger user base.
    It can be predicted that the 'working space' will be reserved on the GPU that's used for rendering, but we don't know if there are other loads using its VRAM during rendering as well. For that we would need someone on W10, using DS 4.15, to do the test and tell us how much of the total VRAM usage went to geometry and textures => then we would have the figures to calculate the load that's not geometry or textures.

     

  • kwerkx Posts: 105

    Long thread.. has anyone addressed the bus size yet?

    LTT did a video talking about the 4060 VRAM bus size being smaller.  Tom's shows the new 4060 Ti 16GB has a 128-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060-ti-review) and the old 3060 12GB has a 192-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3060-review).  The 4060 has new tech (bigger cache etc), but I get the feeling (guess) that we load the VRAM up with very little swapping in and out.. which suggests that loading up 12GB of VRAM prior to render will take longer on the new 4060.

    Curious to see what the actual usage shows.  Most reviews I see are geared towards gaming, streaming, or video creation.. Our use case.. big VRAM to fit our render onto the GPU and use GPU cores that, even if they are old, are typically faster than the CPU.. is a bit niche.
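For a rough sense of what the narrower bus means on paper, bandwidth is bus width times per-pin data rate; the data rates below are the commonly published GDDR6 speeds for these cards and should be treated as approximate:

```python
# Paper bandwidth comparison: bus width (bits) x data rate (Gbps per pin) / 8.
# Data rates are the commonly published GDDR6 speeds for these cards.
cards = {
    "RTX 3060 12GB":    (192, 15.0),   # bus width in bits, Gbps per pin
    "RTX 4060 Ti 16GB": (128, 18.0),
}
for name, (bus_bits, gbps) in cards.items():
    bandwidth = bus_bits / 8 * gbps    # GB/s
    print(f"{name}: {bandwidth:.0f} GB/s")
# RTX 3060 12GB:    360 GB/s
# RTX 4060 Ti 16GB: 288 GB/s
```

Note that this is the on-card memory bandwidth; loading a scene into VRAM before a render goes over PCIe instead, which is the point oddbob and outrider42 pick up below.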

  • Padone Posts: 3,700

    @PerttiA Your considerations don't make sense to me. Why would windows 10 ever "jump in" during rendering? If the allocated vram is zero until you render, that's proof that windows takes nothing.

  • PerttiA Posts: 10,024

    Padone said:

    @PerttiA Your considerations don't make sense to me. Why would windows 10 ever "jump in" during rendering? If the allocated vram is zero until you render, that's proof that windows takes nothing.

    So tell me, what's the maximum amount of VRAM the geometry and textures can use on your GPU before the rendering drops to CPU? That is the question.

    As long as the GPU is not used to drive the monitors or to render, it is just a piece of idle hardware as far as the OS is concerned, but we don't know what happens when one actually starts using it.
    If we knew how much of the total VRAM usage is taken by the geometry and textures, we could calculate the load that's not geometry and textures. What that other load is and where it comes from is not even important; we just need to know how much it is.

  • oddbob Posts: 396

    kwerkx said:

    Long thread.. has anyone addressed the bus size yet?

    LTT did a video talking about the 4060 VRAM bus size being smaller.  Tom's shows the new 4060 Ti 16GB has a 128-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060-ti-review) and the old 3060 12GB has a 192-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3060-review).  The 4060 has new tech (bigger cache etc), but I get the feeling (guess) that we load the VRAM up with very little swapping in and out.. which suggests that loading up 12GB of VRAM prior to render will take longer on the new 4060.

    Curious to see what the actual usage shows.  Most reviews I see are geared towards gaming, streaming, or video creation.. Our use case.. big VRAM to fit our render onto the GPU and use GPU cores that, even if they are old, are typically faster than the CPU.. is a bit niche.

    The cards are also PCIe 4.0 x8, which may have a negative effect when used with a PCIe 3.0 motherboard.
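A rough feel for what the x8 link means when a scene is uploaded to the card before a render; the per-lane rates below are nominal PCIe figures and real transfers run somewhat slower:

```python
# Rough upload-time estimate for pushing a full scene into VRAM over PCIe.
# Per-lane rates are the nominal ~0.985 GB/s (PCIe 3.0) and ~1.969 GB/s (4.0);
# real-world transfers are slower, so treat these as best-case figures.
scene_gb = 12.0
links = {"PCIe 3.0 x8": 8 * 0.985, "PCIe 4.0 x8": 8 * 1.969}  # GB/s

for name, rate in links.items():
    print(f"{name}: ~{rate:.1f} GB/s -> ~{scene_gb / rate:.1f} s to load {scene_gb:.0f} GB")
# PCIe 3.0 x8: ~7.9 GB/s  -> ~1.5 s
# PCIe 4.0 x8: ~15.8 GB/s -> ~0.8 s
```

Even in the slower case this is a one-time cost of a second or two per render, which lines up with outrider42's point below that the link width matters far less for Iray than it does for games.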

  • Kitsumo Posts: 1,216

    kwerkx said:

    Long thread.. has anyone addressed the bus size yet?

    LTT did a video talking about the 4060 VRAM bus size being smaller.  Tom's shows the new 4060 Ti 16GB has a 128-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060-ti-review) and the old 3060 12GB has a 192-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3060-review).  The 4060 has new tech (bigger cache etc), but I get the feeling (guess) that we load the VRAM up with very little swapping in and out.. which suggests that loading up 12GB of VRAM prior to render will take longer on the new 4060.

    Curious to see what the actual usage shows.  Most reviews I see are geared towards gaming, streaming, or video creation.. Our use case.. big VRAM to fit our render onto the GPU and use GPU cores that, even if they are old, are typically faster than the CPU.. is a bit niche.

    There have been a couple of threads that address it (here & here) but they're both pretty old, from before 12GB cards became common. I think the biggest difference I got was about 3 seconds between PCI-E 1x and 4x, and that was for a 4GB card, so you could try to extrapolate that up to 12GB, BUT that was on an i5 6600 machine, so I'm guessing it was one of the older PCI-E versions.

    If I decide to get a 4060ti-16gb, I may run those tests over again, but I'm definitely going to wait and see if prices drop after launch.

  • Kitsumo Posts: 1,216

    PerttiA said:

    Kitsumo said:

    I'm not a Daz developer, but those sound like 2 totally separate things. How much VRAM Win7, Win10 or any other OS makes available to DS is one thing. How much DS allocates specifically for textures and geometry is another. I don't see why the OS would have any control over how a program operates internally.

    Don't get stuck on the OS, the problem is that even when we can monitor the total VRAM usage on the nVidia GPU that's not driving the monitors, we don't know how much of that usage is something else than geometry and textures => We don't know how much of the total VRAM is available for geometry and textures for Iray rendering.

    The test I made on RTX 2070 Super that was driving 3 monitors, using DS 4.15 that does tell how much VRAM is used for geometry and textures, gave the result that geometry and textures could use up to 4.7 GB's out of total 8GB's before the rendering would drop to CPU, later discussions in here revealed that W10 was taking about 1GB of VRAM, where W7 only takes about 200MB's (that's about the differences between OS's).
    The above proved that rendering with the same nVidia GPU that's driving the monitor(s), the VRAM available for geometry and textures on W10 is about 4GB's less than the total installed VRAM on the GPU.

    What we would like to know is, if the monitors are being driven by some other GPU and the nVidia GPU is used only for rendering, how much of the installed VRAM is available for geometry and textures. This test should be done on W10 as it would serve larger user base.
    That much can be predicted that the 'working space' will be reserved on the GPU that's used for rendering, but we don't know if there are other loads using the VRAM during rendering as well, for that we would need someone with W10, using DS 4.15 doing the test and telling us, how much of the total VRAM usage was used for geometry and textures => We would have the figures to calculate the load that's not geometry or textures.

     

    I guess I see where you're going. Initially, I thought this was about determining how much VRAM is available to each GPU - I'm happy to have that discussion.

    But I can confidently say (with all due respect) that I'm not concerned with how much DS has available for geometry and textures vs how much goes to overhead or whatever. That's not my concern as a user. I don't do it for Daz Studio, or Blender or Stable Diffusion, GTA V, Wreckfest, OpenOffice Calc, or whatever. I don't worry about how a program is allocating its available VRAM (or RAM). As long as the OS gives it what it needs, I'm happy, but of course we can agree to disagree.

  • PerttiA Posts: 10,024
    edited May 2023

    Kitsumo said:

    I don't worry about how a program is allocating its available VRAM (or RAM). As long as the OS gives it what it needs, I'm happy, but of course we can agree to disagree.

    The OS can not 'give' something that's not there.

    Now that we are finally getting GPUs with 12 GB and 16 GB of VRAM, it is becoming less of an issue, but when one has a GPU with 8 GB or less, it matters that the OS, DS, the scene, and the working space take about 4 GB of that VRAM first; only the rest is available for geometry and textures, which are the two things that need that VRAM when rendering Iray. That is space one cannot extend by any other means than buying another GPU with more VRAM.

    For reference, one G8 character with clothing and hair can easily use 1.5 GB of VRAM for geometry and textures without the scene having anything else in it, so the issue becomes what one can load into the scene before the GPU runs out of space for the geometry and textures.

    It would be quite useful to know how much of the VRAM on an nVidia GPU that isn't connected to monitors is used for loads other than geometry and textures. Some of it is for sure used for the working space, which is about 1.9 GB with recent versions of DS, but is there something else, and how much?

  • outrider42 Posts: 3,679

    oddbob said:

    kwerkx said:

    Long thread.. has anyone addressed the bus size yet?

    LTT did a video talking about the 4060 VRAM bus size being smaller.  Tom's shows the new 4060 Ti 16GB has a 128-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060-ti-review) and the old 3060 12GB has a 192-bit VRAM bus (https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3060-review).  The 4060 has new tech (bigger cache etc), but I get the feeling (guess) that we load the VRAM up with very little swapping in and out.. which suggests that loading up 12GB of VRAM prior to render will take longer on the new 4060.

    Curious to see what the actual usage shows.  Most reviews I see are geared towards gaming, streaming, or video creation.. Our use case.. big VRAM to fit our render onto the GPU and use GPU cores that, even if they are old, are typically faster than the CPU.. is a bit niche.

    The cards are also PCIe 4.0 x8, which may have a negative effect when used with a PCIe 3.0 motherboard.

    Bus speed has very little impact on Iray. It might have an effect when using multiple rendering devices, but that hasn't been tested enough. With a single GPU it shouldn't make a real noticeable difference, and it certainly would not be enough of a difference to ignore the card or build a new PC. I think most people would accept letting a couple of percentage points go if their heart is set on the extra VRAM.

    The bus speed can hit other programs that rely on it, though. Gaming can be hit by it, but how much it really hurts is hard to say. The 6500xt was severely hampered by its narrow bus, but that was x4. At x8 the 4060ti is still in a much better position in that regard.

    On the flip side, Lovelace has a lot more L2 cache, and any software that is sensitive to cache will benefit greatly from the extra amount. Ideally this balances out the bus speed, but it really depends on the software. You just have to look for reviews that use the specific software you prefer.

    With AV1 encoding and other features, the 16gb 4060ti could appeal to video editors on a budget. It's certainly going to be an odd card when the 4080 is the only other 16gb option available.

    Having said that, the news just keeps getting worse for the 4060ti 8gb. There are people saying that preorder demand is very low, even nonexistent. This is for the 8gb model, but again I don't see the 16gb model burning up the charts. There are also some reports that stock is going to be low on purpose. This could pose a problem for potential discounts, as low stock would help curtail them. Even so, I still think the 16gb model may get some kind of discount shortly after its launch.
