I have the 4060 Ti 16GB and it's OK


Comments

  • I'm not technically savvy when it comes to graphics cards, so I come here for advice when I buy one. I saw a YouTube video where a 3090 preview renders more or less in real time, but I can't find a 3090 PC near me. There are plenty of 4060s, though. Can I achieve the same result with them?

  • Artini Posts: 9,462

    There is a big difference between these 2 cards

    GeForce RTX 3090
    NVIDIA CUDA® Cores: 10496
    24 GB GDDR6X
    Memory Interface Width: 384-bit

    GeForce RTX 4060 Ti
    NVIDIA CUDA® Cores: 4352
    16 GB GDDR6
    Memory Interface Width: 128-bit
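
    For a rough sense of what the memory interface width means in practice, here is a quick back-of-the-envelope bandwidth comparison (the effective data rates are approximate published figures, not measured values):

    ```python
    # Theoretical memory bandwidth ~= bus width (bits) * data rate (Gbps) / 8
    # Data rates below are approximate published specs, not measurements.
    cards = {
        "RTX 3090":    {"bus_bits": 384, "data_rate_gbps": 19.5},  # GDDR6X
        "RTX 4060 Ti": {"bus_bits": 128, "data_rate_gbps": 18.0},  # GDDR6
    }

    for name, c in cards.items():
        bandwidth_gb_s = c["bus_bits"] * c["data_rate_gbps"] / 8
        print(f"{name}: ~{bandwidth_gb_s:.0f} GB/s")

    # RTX 3090:    ~936 GB/s
    # RTX 4060 Ti: ~288 GB/s
    ```

    So besides having fewer CUDA cores, the 4060 Ti moves data in and out of its VRAM at roughly a third of the 3090's rate.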

  • jparks123 Posts: 37
    edited August 2023

    bobety316_c50224ad1f said:

    I'm not technically savvy when it comes to graphics cards, so I come here for advice when I buy one. I saw a YouTube video where a 3090 preview renders more or less in real time, but I can't find a 3090 PC near me. There are plenty of 4060s, though. Can I achieve the same result with them?

    How is this even close to real time? If you want to call that real time, you can get pretty much any GPU with sufficient VRAM and not notice the difference.

    The 4060 has only 8 GB of memory and is not recommended.

    The 4060 Ti is about the same speed as a 3070; it costs more but has more VRAM.

    Decide how much VRAM you need - 12 GB, 16 GB or more. Only consider a 4060 Ti if you need more than 12. Games generally don't (yet), but Daz can consume large amounts of VRAM if you let it.
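
    To give a rough sense of where the VRAM goes, here is a back-of-the-envelope sketch for textures alone (it assumes uncompressed 4K maps stay resident, and the per-figure map count is an illustrative assumption; actual Iray usage depends on texture compression, geometry and render settings):

    ```python
    # Very rough VRAM estimate for texture maps alone (uncompressed).
    def texture_mb(width, height, channels=4, bytes_per_channel=1):
        return width * height * channels * bytes_per_channel / (1024 ** 2)

    map_4k = texture_mb(4096, 4096)   # one 4K RGBA map: ~64 MB
    maps_per_figure = 12              # assumed diffuse/normal/roughness etc. across surfaces
    figure_gb = map_4k * maps_per_figure / 1024

    print(f"One 4K map: ~{map_4k:.0f} MB")
    print(f"One figure with {maps_per_figure} 4K maps: ~{figure_gb:.2f} GB (textures only)")
    ```

    A few dressed figures plus an environment adds up quickly, before geometry and the render buffers are even counted, which is why Daz scenes can outgrow 8 or 12 GB while games mostly do not.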

     

    Post edited by jparks123 on
  • Artini Posts: 9,462
    edited August 2023

    For real-time renders I use the Unity game engine, but the quality of the renders differs from Iray.

    I have to accept that if I want speedy renders, the look of the renders will be different.

    The other possibility is to use online rendering, but that costs a lot.

    Post edited by Artini on
  • marble Posts: 7,500
    edited August 2023

    So I now have the 4080 installed, and it wasn't as straightforward as I thought it would be. The new 4080 comes with a different PCIe power connector that requires an adapter from NVIDIA, which splits into three individual sockets that take three plugs from the PSU. My 850W PSU only had one cable/connector for PCIe, which had me scratching my head. That cable had a short extension with another set of plug connectors, but that still didn't make three.

    I did a bit of YouTube research and discovered that it is important to have each of those three connectors supplied from a separate port on the PSU. I scratched around in the boxes under my bed, found the missing PCIe connector cables, and was eventually able to connect the 4080 as advised.

    Most of you probably know these details already, but for those who don't, this is just a heads-up.
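
    For anyone else doing this, the rough power math explains why three separate feeds are recommended (assuming the usual nominal ratings of ~150 W per 8-pin PCIe cable and ~75 W from the slot, and the 4080's approximate 320 W spec; check your PSU's manual for its actual limits):

    ```python
    # Rough power budget for a 12VHPWR adapter fed from 8-pin PCIe cables.
    # Ratings below are the usual nominal figures, used here as assumptions.
    PIN8_W   = 150   # nominal limit of one 8-pin PCIe cable
    SLOT_W   = 75    # power delivered through the PCIe slot
    CARD_TGP = 320   # approximate RTX 4080 total graphics power

    for cables in (1, 2, 3):
        available = cables * PIN8_W + SLOT_W
        verdict = "enough" if available >= CARD_TGP else "not enough"
        print(f"{cables} cable(s): ~{available} W available -> {verdict}")

    # 1 cable(s): ~225 W available -> not enough
    # 2 cable(s): ~375 W available -> enough
    # 3 cable(s): ~525 W available -> enough
    ```

    Three separate feeds also keep each individual cable well under its rating and leave headroom for transient power spikes, which is presumably why the separate-ports advice comes up so often.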

    Post edited by marble on
  • edited August 2023

    jparks123 said:

    bobety316_c50224ad1f said:

    I'm not technically savvy when it comes to graphics cards, so I come here for advice when I buy one. I saw a YouTube video where a 3090 preview renders more or less in real time, but I can't find a 3090 PC near me. There are plenty of 4060s, though. Can I achieve the same result with them?

    How is this even close to real time? If you want to call that real time, you can get pretty much any GPU with sufficient VRAM and not notice the difference.

    The 4060 has only 8 GB of memory and is not recommended.

    The 4060 Ti is about the same speed as a 3070; it costs more but has more VRAM.

    Decide how much VRAM you need - 12 GB, 16 GB or more. Only consider a 4060 Ti if you need more than 12. Games generally don't (yet), but Daz can consume large amounts of VRAM if you let it.

    Well, it would be good enough for me! When I do the same thing as that video with my 3060, it takes about 20 seconds longer.

    Post edited by Richard Haseltine on
  • kyoto kid Posts: 41,057
    edited August 2023

    ...I remember years ago when a PA here posted a video showing Iray View mode on a system that had three Titan Xs, and it was pretty darn close to that performance. True, only 12 GB instead of 24 GB, but still pretty impressive.

    Post edited by kyoto kid on
  • oddbob Posts: 396

    marble said:

    So I now have the 4080 installed, and it wasn't as straightforward as I thought it would be. The new 4080 comes with a different PCIe power connector that requires an adapter from NVIDIA, which splits into three individual sockets that take three plugs from the PSU. My 850W PSU only had one cable/connector for PCIe, which had me scratching my head. That cable had a short extension with another set of plug connectors, but that still didn't make three.

    I did a bit of YouTube research and discovered that it is important to have each of those three connectors supplied from a separate port on the PSU. I scratched around in the boxes under my bed, found the missing PCIe connector cables, and was eventually able to connect the 4080 as advised.

    Most of you probably know these details already, but for those who don't, this is just a heads-up.

    If you don't like the rat's nest of cables that the adaptor causes, there are direct PSU-to-GPU cables available from Corsair and be quiet! for some of their PSUs. There are also third-party cables available from the likes of CableMod.

  • outrider42 Posts: 3,679

    bobety316_c50224ad1f said:

    jparks123 said:

    bobety316_c50224ad1f said:

    I'm not technically savvy when it comes to graphics cards, so I come here for advice when I buy one. I saw a YouTube video where a 3090 preview renders more or less in real time, but I can't find a 3090 PC near me. There are plenty of 4060s, though. Can I achieve the same result with them?

    How is this even close to real time? If you want to call that real time, you can get pretty much any GPU with sufficient VRAM and not notice the difference.

    The 4060 has only 8 GB of memory and is not recommended.

    The 4060 Ti is about the same speed as a 3070; it costs more but has more VRAM.

    Decide how much VRAM you need - 12 GB, 16 GB or more. Only consider a 4060 Ti if you need more than 12. Games generally don't (yet), but Daz can consume large amounts of VRAM if you let it.

    Well, it would be good enough for me! When I do the same thing as that video with my 3060, it takes about 20 seconds longer.

    Keep in mind what is in this video, and more importantly, what is NOT in this video. People! Where are the people??? Loading up skin shaders takes a moment longer than other shaders. He is also using the denoiser, which, if you notice, is set to start at 0 iterations, meaning it kicks in immediately. The denoiser is doing a lot of work here; that is why things look smeared for several seconds.

    I just did some testing. I loaded up a small office. I did the Iray preview with my 3090, then switched over to my 3060. BOTH GPUs could start showing a picture instantly. The denoiser kicked in, and it looked like a 1980s music video for a few seconds. The 3090 cleaned up the image faster than the 3060, obviously, but my point is that both GPUs started rendering instantly with Iray preview.

    Then I loaded a person from a scene subset and tried the Iray viewport again. This time it took about 3 or 4 seconds for the image to start rendering. And again, this 3-4 seconds was the same whether I used the 3060 or the 3090. The 3090 again showed its muscle by cleaning the image up faster. One thing I noticed was that the person in the scene was much less resolved than the objects around them. The face was very muddy for a good while, with no real detail; I wouldn't be able to identify this character from that image.

    So I have to conclude you have something in your scenes (probably people) clogging up this process. That, or your memory is really slow at moving the data to the GPU. There are all kinds of things that can slow this down.

    So again, I have both of these GPUs, and I can tell you from my own experience: of course the 3090 renders faster once it gets going, but you still have to load that data onto the 3090 first. If you have a bunch of stuff in your scene with complicated shaders, that process will still take some time. Even a mighty 3090 can only do so much with this specific step.

    If you play video games, you may be familiar with the term "shader compilation". Shader compilation is currently one of the most irritating things in gaming: regardless of how awesome your hardware is, you can still get shader compilation stutter during gameplay when something new needs to load. That is down to the game engine and game design. What we are seeing here with Iray is basically shader compilation. It takes some time for the GPU to load the scene and all the shaders. That is just how it is.
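
    The pattern is the same as any compile-on-first-use cache: the first request pays the full cost, and everything after that is nearly free. A loose analogy in code (this is not how Iray is implemented internally, just the general idea; the material name is hypothetical):

    ```python
    import time

    _compiled = {}  # cache of "compiled" shaders, keyed by material name

    def get_shader(material):
        """First request pays the compile/upload cost; later requests hit the cache."""
        if material not in _compiled:
            time.sleep(0.5)                 # stand-in for real compile and upload work
            _compiled[material] = f"compiled({material})"
        return _compiled[material]

    for frame in range(3):
        start = time.perf_counter()
        get_shader("skin_G8_torso")         # hypothetical material name
        print(f"frame {frame}: {time.perf_counter() - start:.3f}s")

    # frame 0: ~0.500s  (compile + upload)
    # frame 1: ~0.000s  (cached)
    # frame 2: ~0.000s  (cached)
    ```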

  • kyoto kid Posts: 41,057

    ...even my old Titan X isn't a total slouch. It's not as fast, but I don't have to go off and do other things while I'm waiting. I often switch to Iray view when working on characters as I make adjustments to skin and hair shaders (the latter with Slosh's UHT 2 and ChevyBabe's Backlight utilities), and they resolve fairly well. Iray view is also helpful when creating skins with Skin Builder. However, I have nothing else in the scene and only use a basic photo studio lighting HDR.

    Can't wait until I can finally upgrade and see how that 3060 performs.

  • outrider42 Posts: 3,679

    Titans often have the largest bus sizes, and so can fill their memory up faster. But the new cards have faster memory chips; the 3060 actually has more bandwidth than the old Titan. Other things can affect this too, such as how fast the CPU and RAM can get the data into VRAM. So it will be interesting to see whether the viewport starts up much faster. But once the viewport does start, it will clean up faster than you are used to. You can also use the denoiser for the viewport, and I think it is helpful for this, though you may not. Like in the video above, you get the funky paint-like smears while it cleans up the image. Some people might be OK with this, some might hate it. You can disable the denoiser before doing your full render.

    If that is the Maxwell Titan X, then yeah, a 3060 will be quite a big upgrade in every way. It can render almost twice as fast as the Pascal 1080 Ti, which itself is a big bump over the older Titan. I think the 1080 Ti is something like 1.5 times faster than the Maxwell Titan, or more. You could be looking at a 3x or even 4x jump depending on what is in the scene. The more complex the geometry, the bigger the gap gets.

    In case people are wondering, that is how the RT cores boost rendering: they handle complex geometry significantly better than pure CUDA, and the more geometry you have, the bigger the performance gap. The old thread that used dForce strand hair as a benchmark showed this off really well, with the RTX cards putting up iteration rates many times faster than GTX. That thread kind of died, which is sad, because it would be cool to see what the 4000 series can do with that scene.

    Each generation the RT cores have made big strides, though the rate of advancement in RT seems to be slowing down.

    It stands to reason that large, VRAM-hogging scenes likely have a lot of geometry in them too, though textures can fill up the memory as well. But the performance gap between generations could grow wider in these large scenes.

    Aha, I found the thread. Keep in mind the last post was way back in 2021, and funnily enough that last post is mine, lol. So all the numbers in that thread are from previous versions of Iray. This could be a chance to see whether the newer Iray has indeed improved here. We all know that the radiant bench is slower with Daz 4.20+, but the Iray dev team claims to have made improvements. So perhaps I will give this old scene a test with my 3090 and 3060 to see how it stands up. I still have my trusty Daz 4.16, and I have 4.21 in beta form.

    https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1

    It would be cool if a 4000 series owner tested this bench, too. I think we might be surprised, in a good way. I hope. Perhaps the 4060ti can flex better here.

  • FSMCDesigns Posts: 12,755

    With all this GPU talk: if you are able to afford a new one, now is the time to buy, since Nvidia is now focused on the AI boom rather than the GPU market, which means less stock, less competitive pricing and less innovation. It also looks like, with the AI boom, everyone is going to start buying up GPUs the way they did during the crypto-mining craze, so stock will be scarce and prices will skyrocket unless they find another way to train the AI. It might be time for Daz to find another render engine option for DS, since Nvidia is more interested in AI.

  • outrider42 said:

    https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1

    It would be cool if a 4000 series owner tested this bench, too. I think we might be surprised, in a good way. I hope. Perhaps the 4060ti can flex better here.

    Link for test scene is dead.

     

  • outrider42 Posts: 3,679

    The AI boom is absolutely an issue, and it is the entire reason why Nvidia is acting the way it is. If there were no push for AI, Nvidia would need to compete somewhere else for money... like gaming. The 4060 Ti would not be $500 in such a world; it probably wouldn't even exist in this form. The 4000 series as a whole would probably look quite different.

    But I don't think we are seeing AI people snatch up gaming GPUs like crypto did. That simply cannot happen. At its peak, every GPU printed money, and there was essentially no limit at all to how many GPUs you could throw at mining. They were buying up old power plants to power these huge farms with thousands of GPUs. That is how insane it got. 

    However AI has a ceiling. What we are seeing right now is a race for compute power for each company to get its own AI tech out the door. But that race has a limit, and even the most wealthy companies have spending limits. Unlike crypto, each new GPU they add does not equal new money being printed.

    Most of the AI companies are looking for dedicated hardware, like the H100 and similar devices, which are going for over $40,000 a pop. You can't blame Nvidia for focusing their chip production on these. The 4090 might be $1,600, but that is a fraction of the cost of these AI chips, and the AI chips are not much bigger than the 4090 die. There are positives and negatives to this. The market is very segmented, and demand for gaming hardware from AI is not what it was with crypto. But it can also mean that Nvidia simply stops making gaming cards for a while as they chase the huge profits on AI chips. At least during the crypto booms they still had to make gaming GPUs. So there is that.

    The low-end hardware is not desirable the way it was for mining. With mining, balancing power draw against hash rate was king, along with a low price for the hardware; it wasn't a race like AI is right now. Low-end GPUs are pretty useless for people doing AI. They can do some things, but Nvidia's stinginess with VRAM prevents gaming GPUs from being anything more than a toy for playing with AI generators at home, not a proper tool for getting real AI work done. That's the difference: we are not building a render engine, we are just rendering. With AI, the race is to build those engines and everything around them.

    There is some crossover. Some of the crypto farms that were still hanging around have moved on to providing AI compute services. But it has not been anything like crypto; they are not making the money they did when mining was hot.

    In the next year or two, we are going to see a lot of AI start up companies fail. That is just how it goes. There are big winners and big losers. When that happens, the AI race will be largely over. Obviously there will be demand for AI hardware, but nothing like it is now.

    No doubt Nvidia will stay in the AI game, and the AI game is not going away. Don't get me wrong. But the AI game is not going to stay at this level for very long, either. It will come down, and Nvidia will have to focus on the gaming sector again.

  • outrider42 Posts: 3,679

    jparks123 said:

    outrider42 said:

    https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1

    It would be cool if a 4000 series owner tested this bench, too. I think we might be surprised, in a good way. I hope. Perhaps the 4060ti can flex better here.

    Link for test scene is dead.

    I uploaded a copy of the scene to ShareCG, and the link is on the last page of that thread. I'll link it here, too.

    https://sharecg.com/v/97260/view/21/DAZ-Studio/Iray-RTX-Benchmark-Scene

  • I installed my new 4060 Ti 16GB today, but before doing so I did a quick test render using my old 3060 (12GB), then ran the same scene again with the new 4060 Ti.

    It's worth pointing out that this scene runs the 12GB card out of VRAM if I render it at high resolution, which is the reason for buying the new card. I can now render the scene at 4K.

    For the test renders I therefore reduced the resolution to 533 x 800 pixels and let each run for 45 minutes. The scene contains a room, lighting, mirrors, a G8 figure with long strand-based dForce hair, and some dForce clothing.

    Here are the results:
    NVIDIA RTX 3060 12GB: VRAM use = 11.6 GB during the render
    CUDA device 0 (NVIDIA GeForce RTX 3060): 6187 iterations, 5.320s init, 2590.752s render (2.39 iterations per second)
    GPU temperature 72 degrees C. Ambient temp 20 degrees C.

    NVIDIA RTX 4060 Ti 16GB: VRAM use = 11.5 GB during the render
    CUDA device 0 (NVIDIA GeForce RTX 4060 Ti): 7022 iterations, 4.309s init, 2568.136s render (2.73 iterations per second)
    GPU temperature 55 degrees C. Ambient temp 20 degrees C.

    Observations:
    The three-fan MSI 4060 Ti is quieter than the two-fan MSI 3060. The lower temperature is noticeable insofar as the office doesn't heat up as much with the new card.
    Rendering speed improved by roughly 14% (2.73 vs 2.39 iterations per second). I bought the 4060 Ti for the additional VRAM, which lets me render more complex scenes, but if I'm honest I was also expecting a bigger improvement in rendering speed, what with the more modern architecture and ray-tracing cores.

    Final thoughts:
    The 16GB 4080 costs approx £1200 here. Two 4060 Ti 16GB cards would cost £960.
    The two-card config would use approximately the same amount of power as the 4080. I know the game reviews say the 4080 doesn't give double the performance of a single 4060. So in Daz rendering, would 2x 4060 Ti give double the performance of a single 4060 Ti, and therefore be better and cheaper than a single 4080 setup?
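
    For anyone who wants to check the arithmetic, the rates and the relative speedup work out like this (pure arithmetic on the log figures quoted above, no other assumptions):

    ```python
    # Iterations per second and relative speedup from the quoted log lines.
    results = {
        "RTX 3060 12GB":    (6187, 2590.752),  # (iterations, render seconds)
        "RTX 4060 Ti 16GB": (7022, 2568.136),
    }

    rates = {name: iterations / seconds for name, (iterations, seconds) in results.items()}
    for name, rate in rates.items():
        print(f"{name}: {rate:.2f} iterations/s")

    speedup = rates["RTX 4060 Ti 16GB"] / rates["RTX 3060 12GB"]
    print(f"4060 Ti vs 3060: {speedup:.2f}x (~{(speedup - 1) * 100:.0f}% faster)")

    # RTX 3060 12GB:    2.39 iterations/s
    # RTX 4060 Ti 16GB: 2.73 iterations/s
    # 4060 Ti vs 3060: 1.14x (~14% faster)
    ```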

  • outrider42 Posts: 3,679

    It is important to remember that every scene can give you a different answer. I thought the 4060 Ti would separate itself more from the 3060 there, since I thought the 3060 Ti was already a good bit faster. That result seems odd.

    Iray is not like gaming, so those kinds of benchmarks are of little value. The only numbers we have are from our benchmark scene, and I posted them earlier in this thread. But again, those numbers are not absolute, and can shift depending on what is in the scene.

    There is also one other thing: according to the Iray dev team, Iray is not taking full advantage of the 4000 series as a whole. We suspected as much right from the start, and the dev team has indeed confirmed it. They have promised that Iray 2023 improves Lovelace performance, and performance on all GPUs for that matter. Sadly, we have yet to see Iray 2023 in Daz Studio; an update was just released, but without a new Iray.

    At any rate, eventually, some day, we may see the 4000 cards get faster. Of course this also means the 4080 would get faster as well, but at least the 4060 Ti should widen its gap over the 3060.

    I have to say, at the very least, a 17C drop is massive. Running at 55C is ridiculously low for air cooling. This might also be because the card is not being fully utilized.

  • nick_82e9d0c8 said:

    I installed my new 4060 Ti 16GB today, but before doing so I did a quick test render using my old 3060 (12GB), then ran the same scene again with the new 4060 Ti.

    It's worth pointing out that this scene runs the 12GB card out of VRAM if I render it at high resolution, which is the reason for buying the new card. I can now render the scene at 4K.

    For the test renders I therefore reduced the resolution to 533 x 800 pixels and let each run for 45 minutes. The scene contains a room, lighting, mirrors, a G8 figure with long strand-based dForce hair, and some dForce clothing.

    Here are the results:
    NVIDIA RTX 3060 12GB: VRAM use = 11.6 GB during the render
    CUDA device 0 (NVIDIA GeForce RTX 3060): 6187 iterations, 5.320s init, 2590.752s render (2.39 iterations per second)
    GPU temperature 72 degrees C. Ambient temp 20 degrees C.

    NVIDIA RTX 4060 Ti 16GB: VRAM use = 11.5 GB during the render
    CUDA device 0 (NVIDIA GeForce RTX 4060 Ti): 7022 iterations, 4.309s init, 2568.136s render (2.73 iterations per second)
    GPU temperature 55 degrees C. Ambient temp 20 degrees C.

    Observations:
    The three-fan MSI 4060 Ti is quieter than the two-fan MSI 3060. The lower temperature is noticeable insofar as the office doesn't heat up as much with the new card.
    Rendering speed improved by roughly 14% (2.73 vs 2.39 iterations per second). I bought the 4060 Ti for the additional VRAM, which lets me render more complex scenes, but if I'm honest I was also expecting a bigger improvement in rendering speed, what with the more modern architecture and ray-tracing cores.

    Final thoughts:
    The 16GB 4080 costs approx £1200 here. Two 4060 Ti 16GB cards would cost £960.
    The two-card config would use approximately the same amount of power as the 4080. I know the game reviews say the 4080 doesn't give double the performance of a single 4060. So in Daz rendering, would 2x 4060 Ti give double the performance of a single 4060 Ti, and therefore be better and cheaper than a single 4080 setup?

    I happened to purchase the same GPU, a 4060 Ti 16GB, with an i5 12400 and 32 GB of RAM; however, renders are not just crashing but also slowing down my new setup.

    Daz is more of a hobby; I use Rhino and Daz 3D mainly for design purposes. May I know which Game Ready / Studio driver you have installed for version 4.21?

    I'll await your guidance on this.

     

  • taragemstones7 said:

    nick_82e9d0c8 said:

    I installed my new 4060 Ti 16GB today, but before doing so I did a quick test render using my old 3060 (12GB), then ran the same scene again with the new 4060 Ti.

    It's worth pointing out that this scene runs the 12GB card out of VRAM if I render it at high resolution, which is the reason for buying the new card. I can now render the scene at 4K.

    For the test renders I therefore reduced the resolution to 533 x 800 pixels and let each run for 45 minutes. The scene contains a room, lighting, mirrors, a G8 figure with long strand-based dForce hair, and some dForce clothing.

    Here are the results:
    NVIDIA RTX 3060 12GB: VRAM use = 11.6 GB during the render
    CUDA device 0 (NVIDIA GeForce RTX 3060): 6187 iterations, 5.320s init, 2590.752s render (2.39 iterations per second)
    GPU temperature 72 degrees C. Ambient temp 20 degrees C.

    NVIDIA RTX 4060 Ti 16GB: VRAM use = 11.5 GB during the render
    CUDA device 0 (NVIDIA GeForce RTX 4060 Ti): 7022 iterations, 4.309s init, 2568.136s render (2.73 iterations per second)
    GPU temperature 55 degrees C. Ambient temp 20 degrees C.

    Observations:
    The three-fan MSI 4060 Ti is quieter than the two-fan MSI 3060. The lower temperature is noticeable insofar as the office doesn't heat up as much with the new card.
    Rendering speed improved by roughly 14% (2.73 vs 2.39 iterations per second). I bought the 4060 Ti for the additional VRAM, which lets me render more complex scenes, but if I'm honest I was also expecting a bigger improvement in rendering speed, what with the more modern architecture and ray-tracing cores.

    Final thoughts:
    The 16GB 4080 costs approx £1200 here. Two 4060 Ti 16GB cards would cost £960.
    The two-card config would use approximately the same amount of power as the 4080. I know the game reviews say the 4080 doesn't give double the performance of a single 4060. So in Daz rendering, would 2x 4060 Ti give double the performance of a single 4060 Ti, and therefore be better and cheaper than a single 4080 setup?

    I happened to purchase the same GPU, a 4060 Ti 16GB, with an i5 12400 and 32 GB of RAM; however, renders are not just crashing but also slowing down my new setup.

    Daz is more of a hobby; I use Rhino and Daz 3D mainly for design purposes. May I know which Game Ready / Studio driver you have installed for version 4.21?

    I'll await your guidance on this.

     

    Hi.

    FYI, I'm running Daz 4.21 and using the latest NVIDIA Studio driver as of 2023-10-03, i.e. version 537.42.

    I'm running 64GB of system RAM and an Intel i5-13400 with Windows 11.

    It's 100% stable with no problems at all. I'm still very happy and think it was the right decision for my budget and heat/noise limits. Rendering speed seems acceptable to me. I don't use the post denoiser, though, as that really slows down the renders massively as soon as it kicks in.

    I'm currently rendering a scene 4000 pixels wide with 5 Genesis 9 characters (some with dForce hair and clothes) and several light sources. Yes, it's slow (it will take 2 or 3 days or more)... but it fits in GPU RAM (just) and uses 40GB of system RAM. I have no concerns about stability when leaving it rendering for a few days at a time like this, and heat build-up is fine. After 16.5 hours of rendering, the GPU temperature is stable at 50 degrees.
