I have the 4060TI 16GB and it's OK

jparks123 Posts: 37
edited July 2023 in The Commons

[The 4060TI 16GB] arrived yesterday.

I have not yet had the time to do a full test, but the performance IN DAZ STUDIO is OK. It is noticeably slower than my 4080. A friend with a 3070 spun the viewport around in iRay and felt it was similar (not a good test, I know, but he can clearly see the 4080 is quicker). The lower memory bandwidth probably doesn't make much difference in Daz, since it doesn't need to update the image 60 times per second.

I feel it's worth it for the VRAM and price combo, if your main use is Daz. Don't get it for gaming, however.

I'll do a full test over the next few days and upload to the benchmarking thread here. https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking/p1

edit: Unless there is a better benchmark thread?

Post edited by Richard Haseltine on

Comments

  • Artini Posts: 9,462

    Great.

    Have you tested it with Stable Diffusion?

     

  • FSMCDesigns Posts: 12,755

    I have seen a few benchmark videos on the 4060 and while it gives a DS user 16 gb of vram, the card really isn't much better than a 3060

    Artini said:

    Great.

    Have you tested it with Stable Diffusion?

     

    Not everyone has an interest in AI.

  • savagestug Posts: 172

    FSMCDesigns said:

    I have seen a few benchmark videos on the 4060 and while it gives a DS user 16 gb of vram, the card really isn't much better than a 3060

    Artini said:

    Great.

    Have you tested it with Stable Diffusion?

     

    Not everyone has an interest in AI.

    I was interested in the 4060 16gb card, but my 3060 died and I couldn't wait (also work remote on my PC), so I'm now on a 4070 12gb. A little bummed, but I do also play a few games, and the preliminary reviews of the 4060 for gaming aren't all that glowing.

  • NylonGirl Posts: 1,817

    FSMCDesigns said:

    I have seen a few benchmark videos on the 4060 and while it gives a DS user 16 gb of vram, the card really isn't much better than a 3060

    Artini said:

    Great.

    Have you tested it with Stable Diffusion?

     

    Not everyone has an interest in AI.

    There's nothing wrong with the question.  Some of us are interested in AI.

  • jparks123 Posts: 37
    edited July 2023

    FSMCDesigns said:

    I have seen a few benchmark videos on the 4060 and while it gives a DS user 16 gb of vram, the card really isn't much better than a 3060

     

    Where have you seen benchmark videos for iRay or CUDA?

    This site, for Octane Renderer, shows it matches the 3070. https://www.cgdirector.com/octanebench-benchmark-results/

    Video games with raytracing are engineered differently. Do not get the 4060 TI for games :-)

    Post edited by jparks123 on
  • Richard Haseltine Posts: 100,948

    Asking about AI use is perfectly legitimate; if it is an aspect you can't answer, simply pass over that part of the question.

  • kyoto kid Posts: 41,057
    edited July 2023

    ...nice to get a bit of feedback on this.

    Still thinking about the 4060Ti to get that extra VRAM "overhead" when needed. Half the cost of a 4080 (or an A4000) and about half the power draw, which means less strain on the PSU.

    Post edited by kyoto kid on
  • outrider42 Posts: 3,679

    Yes, if you can, please use the bench in the thread you posted. All of the content for the scene is free with Daz and various Genesis Starter Essentials.

    Toulouse Hair -> free with every Genesis Starter Essentials; the one used in the bench comes from G2F.

    Shadow Thief for Genesis 8 (top) -> free with DS itself; if not installed, search your Daz product library on the website to download.

    Persian Beauty (pants) -> free with Genesis 2 Female Starter Essentials

    You should have everything else. The shaders used are also included with DS.

  • kyoto kid Posts: 41,057

    ...so the RTX 4090 outscores Nvidia's "flagship" 8,000 USD 48 GB RTX 6000 Ada?  Would love to know what criteria were used, since the 6000 only shows one benchmark while the 4090 has 306.

  • Kitsumo Posts: 1,216

    kyoto kid said:

    ...so the RTX 4090 outscores Nvidia's "flagship" 8,000 USD 48 GB RTX 6000 Ada?  Would love to know what criteria were used, since the 6000 only shows one benchmark while the 4090 has 306.

    I'm not an expert by any means, but I think the pro cards have lower heat dissipation/fan setups because they're going to be in a forced-air ventilated rackmount setup. Also, they're dual-slot cards according to the webpage. And data center customers care more about lower power use (and A/C costs) and VRAM pooling than about higher performance per card.

    Meanwhile, the consumer version can have a massive heatsink and take up 3 or 4 slots and extend far enough to block the SATA connectors on your drives. The typical home user is only going to buy one 4090, or maybe two, so they'll adjust their whole setup to accommodate it.

  • IceCrMn Posts: 2,129
    edited July 2023

    I'm very interested in the Studio test results.

    The 16GB of VRAM is the selling point of the 4060ti for me.

    If the performance is lower than my 3060 12GB, I don't feel the MSRP of $499 (US dollars) can be justified for 4GB more VRAM.

    Post edited by IceCrMn on
  • outrider42 Posts: 3,679

    kyoto kid said:

    ...so the RTX 4090 outscores Nvidia's "flagship" 8,000 USD 48 GB RTX 6000 Ada?  Would love to know what criteria were used, since the 6000 only shows one benchmark while the 4090 has 306.

    Clockspeed. The gaming products always have much higher clockspeeds than their pro versions do. For servers, stability is king, so the clockspeed absolutely must be as close to 100% stable as possible. With video game software this really is not such an issue, so they can safely increase the clockspeeds.

    Everything is tied to clockspeed. It is why gaming cards use so much more electricity than the pro cards. It is why gaming cards have such crazy coolers. Servers also prefer to have their cards more passively cooled because of how they stack them. So again, a high, power-hungry clockspeed is less desirable.

    The key with any chip is finding the balance between efficiency and power draw. But sometimes gaming cards push things a little farther out on the chip's efficiency curve, like most of Ampere. The 3000 series really should not have been so power hungry.

    If the Ada 6000 could be matched to the exact same clockspeeds as the 4090, then the 6000 would win.

    Don't forget you can always tune your GPU. If the power draw is higher than you like, you can adjust it yourself. Of course this can reduce performance, but for some GPUs the performance loss is barely noticeable. That goes back to the efficiency curve.

    Think of GPUs being like cars. Gaming GPUs are basically race cars, while the pro GPUs are perhaps more like trucks or work vans. The trucks have more power, but the race cars are faster. They are tuned for different things. Trucks can haul heavy stuff that race cars cannot, and pro GPUs can handle compute tasks that gaming GPUs struggle with.

    There are a number of tasks that gaming cards are "nerfed" for compared to pro cards. But thankfully 3D rendering is not one of them, and gaming cards can actually be faster thanks to their clockspeeds.

  • Drip Posts: 1,192

    Are the drivers even updated for iray yet? I remember it took months for the 20-series before they got proper iray support, and I'm pretty sure the 30-series initially used the code from the 20-series as well, though in that case it was pretty compatible already and gave noticeable improvements over the predecessor. So the 4060ti not being impressive might be a driver issue, which can take anywhere between a few days and many weeks to get sorted.

  • outrider42 Posts: 3,679

    Supposedly, the newest Iray 2023 improves Lovelace performance. So this is not just for the 4060ti, but the whole lineup. Actually, according to the Iray dev team, Iray 2023 improves performance for all GPUs, but Lovelace sees the biggest improvement. The SDK was released on June 7. Daz Studio has to update its Iray plugin, but no updates for DS have been released since...April.

    They have not gone this long without releasing a new DS beta in a long time. They could be saving the update for DS5, but nobody ever tells us what is going on. For all we know, DS5 could come out tomorrow...or it might still be another year (or years) away. However, I still think that would suck for Daz users. Not everybody is going to want to jump to DS5 right away, for various reasons. So tying the update to DS5 is not the right thing to do.

    Waiting to put the new update in DS5 delays Daz users' access to the latest Iray. You guys have been working on DS5 for years. YEARS. If you have the plugin, you need to get it out to your customers sooner, not later. Iray 2023 does more than improve speed; they list a bunch of bug fixes as well. It needs to release for DS4. If Daz is truly waiting on DS5, there is no telling how long that wait might be. I can certainly remember waiting ages for updates in the past.

    Iray 2023 is already available in Nvidia Omniverse. They released it immediately back in June. They call it "RTX Accurate" in that software. I have not tried Omniverse, and we do not have a bridge directly to it yet. Which, funnily enough, DS5 supposedly will have. Maybe. Once again, Daz-Tafi finds unique ways to confuse their own customers, as the pic says the bridge is for "Nvidia Omniverse Enterprise". This implies the bridge is exclusive to Enterprise users, which is a very expensive professional subscription. Nobody has clarified if this is indeed the case. If it is only for Enterprise, it is basically useless for us normal people. Even a Daz PA wouldn't be able to justify paying the fee.

    Omniverse itself is actually free to use. The Enterprise version, like the name implies, is for professional studios with multiple people. You are even required to buy a minimum number of seats for the Enterprise version; it is not for individuals. So again, even a small team working under a PA would not be big enough for an Enterprise edition. Not to mention the cost goes up for every seat you buy.

  • kyoto kid Posts: 41,057
    edited August 2023

    Kitsumo said:

    kyoto kid said:

    ...so the RTX 4090 outscores Nvidia's "flagship" 8,000 USD 48 GB RTX 6000 Ada?  Would love to know what criteria were used, since the 6000 only shows one benchmark while the 4090 has 306.

    I'm not an expert by any means, but I think the pro cards have lower heat dissipation/fan setups because they're going to be in a forced-air ventilated rackmount setup. Also, they're dual-slot cards according to the webpage. And data center customers care more about lower power use (and A/C costs) and VRAM pooling than about higher performance per card.

    Meanwhile, the consumer version can have a massive heatsink and take up 3 or 4 slots and extend far enough to block the SATA connectors on your drives. The typical home user is only going to buy one 4090, or maybe two, so they'll adjust their whole setup to accommodate it.

    ...if I had the funds for an RTX 6000 Ada, which has a TDP just slightly higher than the old Maxwell Titan X I have, double the VRAM, over 18,000 shader units, and fits nicely in my existing case with 7 fans that provide excellent airflow, I would easily take that over a triple-slot, foot-long card, and would likely never have to be much concerned about a scene staying in VRAM.

    Of course, that's if I had the funds. For now I'll stick to considering the 16 GB 4060 Ti as, in my book, the more VRAM, the less chance of the render process dropping to the CPU and having to spend time optimising textures and detail to keep a scene in VRAM, particularly when rendering in large-size format.

    Even the next step down, the 32 GB RTX 5000 Ada, comes fairly close to the 4090 in specifications, as it uses the same GPU.

    @outrider42

    Indeed, I'm looking more for hauling a heavy load than "race car" performance, as I don't bother with gaming.  As I mention above, more VRAM is preferable to blazing speed.  Before I got my Titan X I was still rendering in Iray on the CPU for years.  Even a card three generations older (than the RTX 2xxx series) was a massive jump in performance, as renders that often had to run overnight were completing in less than 20 - 30 min.

    The one thing that held me back was the lack of enough system memory to support heavier scenes, which throttles the process due to more frequent paging. An 8 GB scene in VRAM could exceed the 24 GB of available system memory I currently have (one scene hit about 28 GB), given that about 1 GB of that is reserved for the OS and other system processes, resulting in render times upwards of 2h 30m (still far better than CPU rendering for a more complex scene).

    The new memory upgrade to 64 GB should alleviate that (and I still will have two empty DIMM slots to increase that to 128 if need be). 

    Post edited by kyoto kid on
  • jparks123 Posts: 37
    edited August 2023

    4 minutes 38 seconds for the benchmarking test scene (total, including about 30 seconds setup before it starts rendering)

    I'll post formally to the benchmark thread shortly. It's fiddly to type in all the values it wants in the proper format.

    https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking/p1

    2023-08-01 23:21:40.442 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2023-08-01 23:21:40.442 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 4060 Ti): 1800 iterations, 28.430s init, 249.087s render

    7.22 iterations/sec
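
    For anyone who wants to recompute that rate themselves, here is a minimal sketch (assuming Python is to hand) that pulls the iteration count and render time out of the device-statistics line; the regex is illustrative, not an official Iray log parser.

    import re

    # The Iray device-statistics line quoted above.
    log_line = ("CUDA device 0 (NVIDIA GeForce RTX 4060 Ti): "
                "1800 iterations, 28.430s init, 249.087s render")

    # Grab iterations and render seconds; init/setup time is excluded on purpose.
    m = re.search(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render", log_line)
    iterations, render_s = int(m.group(1)), float(m.group(2))

    # 1800 / 249.087 ≈ 7.226, so this prints 7.23 (the post truncates to 7.22).
    print(f"{iterations / render_s:.2f} iterations/sec")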

     

    Post edited by jparks123 on
  • outrider42 Posts: 3,679

    Remember, Iray is a little different from other tasks in that it is not nerfed on gaming cards at all, and actually runs faster on gaming cards, as I explained. So pro cards have only one benefit: VRAM. You have to really, really need that VRAM in order to justify such a card. It is hard to estimate how much someone truly needs, since scenes can vary so much.

    Quadros are highly specialized, while gaming cards are more multi purpose. If the gaming GPU has enough VRAM, then Quadro should not be any consideration at all.

    I can recall some people thinking about getting the A4000, which has 16gb of VRAM. The A4000 is roughly as fast as the 3070. Even today it still sells for about $1000 depending on where you are located. The 4060ti is a far better choice now IMO. Of course you can also grab a used 3090 for less, too, making the A4000 irrelevant for Iray. The only perk of the A4000 is being a single-slot card, so you can squeeze more of them into a space if you wanted to. But if that is the goal, the 4090 makes more sense.

    A used 3090 would be the best option in general, it would be the cheapest way to get 24gb and pretty darn fast renders. The 4090 has dropped a little in price, but not much. It will not drop too far until a successor releases. But a successor may not come until 2025. That is a long time to wait. I suspect that the 4090 will drop in price like a stone when the 5090 gets announced. We see that every generation. The 4090 might actually go back up after the 5090 launches, that has happened many times, too. But again, that is like 2025. We all have to consider what our respective budgets allow us to do, and what our priorities are.

    A little side note: I had a back and forth with some kid who refused to believe that people would buy gaming GPUs for things besides gaming. I know 3D rendering is not as common as gaming, but a lot of people do create stuff because...they simply want to create stuff. This can be a hobby just like playing video games. I am not sure why this is something they cannot understand. A LOT of gamers have no concept that people like us exist. They think only professionals use such software, and that professionals are going to buy the professional hardware. Which is nonsense.

    Daz Studio would not even be around today without its niche group of users. Daz is only one program out there in a sea of 3D software. There is a whole world of people who just want to create stuff to express themselves, not to make money. Some people might make occasional money on the side, but it isn't their goal. We run the full spectrum. There is no typical Daz user. A lot of people here do not play video games at all. I do play games, though I play a whole lot less than I used to. I can go months without booting up a game now.

  • kyoto kid Posts: 41,057

    ...granted I've been getting good results out of my 5 generation old Titan-X though again the RTX emulation hamstrings it a bit in the VRAM department.  

    Again, I already have an RTX 3060 12 GB, but need to upgrade to a much newer MB with a more recent BIOS to actually use it (it's back in the box for now).

    12 GB seems fine for a modest scene, but I have pegged my Titan at about 10 GB in a more complex (though not overly so) one, which also had a number of emissive sources and lots of ray bounces as it was an interior scene.  That was one of the scenes which dumped to VM with its slow paging.  Granted, more memory would fix that.  However, given this scene is not as detailed as ones I did in 3DL, I am beginning to see that more VRAM would be better, and that is part of the attraction of the pro grade cards. The other is they use less power and run cooler.

    True, a 5000 Ada will be more expensive than a 4090 (likely about where the A5000 was priced), but it would fit in my current system and generate less heat; as I don't mess around with liquid cooling, less heat is easier to dissipate with air cooling (particularly in the fan setup I have).  The 5000 Ada also has the same exact dimensions as the Titan X, whereas the 4090 takes three slots and is an inch and a half longer as well as an inch wider (the 3090 is actually even bigger than the 4090, longer by another inch and a fraction wider).

    The more realistic proposition, the 4060 Ti 16 GB, on the other hand, would easily fit with room to spare, as it is slightly smaller than my old Titan, which means a bit more "breathing space". The only reason I was considering an A4000 was because at the time it was the only 16 GB GPU Nvidia offered (before the RTX 4xxx generation was released); it pretty much doubled the number of shading units (cores) of my Titan X (and they were also 4 generations newer), as well as adding the benefit of Tensor and RT cores. Indeed the 4060 Ti surpasses that and is one more generation advanced.

     

  • outrider42 Posts: 3,679

    We don't have a bench for the A4000, but the 4060ti is probably right around its speed. The A4000 actually uses the 3070ti chip, not the 3070, but it performs more like a 3070 because of the clockspeeds.

    If you did get a 4090, you can always reduce the power if you don't want the heat. But users have reported that the 4090 only uses about 280 watts while rendering in Iray. This is pretty close to the Titan Maxwell. It is nothing like the room heater some people said it was going to be. If you play video games you might use a bit more power, but that's a non-issue for you. So you don't need liquid cooling as it is, and you can simply turn down the power if need be. You can turn the power down to match what the A5000 uses.
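
    For anyone wanting to try that, below is a rough sketch using the nvidia-ml-py (pynvml) bindings to read and cap a card's power limit; the 280 W target is just an example, the set call needs admin/root rights, and nvidia-smi -pl 280 from an elevated shell does the same thing.

    import pynvml

    pynvml.nvmlInit()
    try:
        gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

        # Current limit and the range the board allows, in milliwatts.
        current = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)
        lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
        print(f"limit {current // 1000} W (allowed {lo // 1000}-{hi // 1000} W)")

        # Clamp an example 280 W target into the permitted range, then apply it.
        target = max(lo, min(280_000, hi))
        pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target)
    finally:
        pynvml.nvmlShutdown()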

    As for size, if the 4090 is too big for the case, then get a new case. That might sound extreme, but that is way cheaper than any A5000. A modern case like I have will have more than enough space for a 4090. I have a 3090 and a 3060 together in my case quite comfortably. The only potential issue with a 4090 in my case would be if I wanted to keep a 2nd GPU installed. I am pretty sure I can still fit a 4090 and a 3090 in my case. The gap between them would be tight, but it would work. Of course a smaller card like a 3060 will fit just fine with the 4090.

    My 3090 is the FE model, so it is not as comically huge as some 3rd party models. You can see there is a lot of space around my 3090, even with the 3060 installed above it. For 2 GPUs, it is a good idea to put the smaller one on top.

    If you are concerned about a place for a disc drive, you can get external USB drives super cheap. Again, way cheaper than going to an A5000. My case was like $100, a Coolermaster H500 I believe. They have a lot of variations of this with slightly different prefixes, as it is a popular case, but they share basically the same chassis. I just leave the case open because that's how I roll, lol. Though I didn't care for the tempered glass side panel, that's the main reason I left it off. It runs slightly cooler wide open like this. Things are modular and easy to work with. There is a hard drive bay that can fit 2 HDDs or several SSDs, but I removed it. Its location would be to the right of the power supply; you can see some notches on the floor for it, and it can be moved into one of three positions if you use it. The SSDs are mounted on the other side where you cannot see them, and I have a M.2 on the motherboard. I have a couple of USB drives for backup.

    Modern cases are also way easier to work in than old ones. Anybody who has not bought a case in a long time will be fairly pleased with how much easier these are to deal with than something from say 10 years ago.

    Anybody who doesn't want RGB on the fans can simply turn them off. While I do not care for the "gamer" style we often see, I do have a soft spot for pretty lights. I have plasma balls and lava lamps on shelves, too.

  • Petercat Posts: 2,321

    kyoto kid said:

    ...granted I've been getting good results out of my 5 generation old Titan-X though again the RTX emulation hamstrings it a bit in the VRAM department.  

    Again I already have an RTX 3060 12 GB, but need to upgrade to a much newer MB with a more recent BIOS to actually use it.(it's back in the box for now)..

    Wow. How old is your computer? One of mine has an i7 7700, and it has an RTX 3060 in it that works fine. It's a Dell 3620 that's so old it won't recognize a hard drive larger than 1TB. (It thinks my 2TB drive is two 1TB hard drives, E and F).

  • marble Posts: 7,500
    edited August 2023

    I bought an RTX 4070 12GB to replace my failed RTX 3090, but that was before I was aware of the availability of the 4060ti with 16GB. Now I have had confirmation that my warranty claim for the 3090 has been approved and that I have been given store credit to replace it. I'm seriously thinking of adding the 4060ti to my 4070. Does anyone see any reason not to? I am assuming that IRay will use both cards (at least up to the 12GB limit) and drop to the 4060ti if I happen to exceed 12GB.

    What I am worried about is the possibility that IRay will try to use the maximum VRAM available and thus constantly exceed the 12GB limit of the 4070 making it redundant. I've already noticed with my 4070 that even modest scenes seem to take almost all of the 12GB according to GPU-Z although I can add another character to the scene and it still doesn't drop to CPU.

    Post edited by marble on
  • jparks123 Posts: 37
    edited August 2023

    marble said:

    I bought an RTX 4070 12GB to replace my failed RTX 3090, but that was before I was aware of the availability of the 4060ti with 16GB. Now I have had confirmation that my warranty claim for the 3090 has been approved and that I have been given store credit to replace it. I'm seriously thinking of adding the 4060ti to my 4070. Does anyone see any reason not to? I am assuming that IRay will use both cards (at least up to the 12GB limit) and drop to the 4060ti if I happen to exceed 12GB.

    What I am worried about is the possibility that IRay will try to use the maximum VRAM available and thus constantly exceed the 12GB limit of the 4070 making it redundant. I've already noticed with my 4070 that even modest scenes seem to take almost all of the 12GB according to GPU-Z although I can add another character to the scene and it still doesn't drop to CPU.

    Another option is to sell the 4070 and combine that with the store credit to trade up to a single-card 4080. It's going to be a little slower as it's a single-GPU system, but it's also a lot less aggravation, and the resale value of a 4060TI will be bad since gamers do not want it.

    Post edited by jparks123 on
  • outrider42 Posts: 3,679

    That's up to you. You are the only one who can truly say how much VRAM you need. Since you had a 3090 before, and then a 4070, you are in a unique position to know how much you tend to use. Most of the time people ask this question, they are coming from a lower VRAM capacity, and it can be really hard for them to estimate how much they really need. If you consistently push past 12gb, you know what your options are...not a lot to be honest: 4060ti, 4080, 4090.
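
    If you would rather measure than guess, here is a small sketch that polls per-GPU VRAM use while a render runs, using the nvidia-ml-py (pynvml) bindings as an alternative to watching GPU-Z; the one-second interval and sixty samples are arbitrary choices.

    import time
    import pynvml

    pynvml.nvmlInit()
    try:
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        for _ in range(60):  # sample once a second for a minute
            for i, h in enumerate(handles):
                mem = pynvml.nvmlDeviceGetMemoryInfo(h)
                print(f"GPU {i}: {mem.used / 2**30:5.1f} / {mem.total / 2**30:.1f} GiB used")
            time.sleep(1)
    finally:
        pynvml.nvmlShutdown()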

    If you are getting back the full value of your 3090 you should be able to skip it all and get a 4090 and not worry about any of it. 4090s are going below MSRP now, so there should be one for nearly the same or less than the 3090 was.

    The thing with Lovelace is that the gaps between tiers are seriously massive, and it is kind of sad that things are like this. This adds more complication to the equation, because the 4070 only has 12gb, but it is so much faster than the 4060ti. And when you factor in the price, a 4070 is not that much more than a 4060ti 16gb. It is so frustrating.

    Here are some iteration counts from the benchmark thread. It is important to note the benchmark is not absolute. The performance can vary by scene, this is a ballpark number. Also these may not be from the exact same version of Daz Studio, as a recent update slightly changed some numbers.

    3060  6.6

    4060ti  7.23

    4070  12.71

    4070ti  14.175

    4080  16.5 to 19.6

    3090  16.7

    4090  28.5

    A5000 14.36

    Titan RTX (Turing)  8

    I think this puts things into perspective better. On one hand, the 4060ti is really close to a Titan RTX, which is faster than a 2080ti. I'd say the 4060ti is a tiny bit faster than a 2080ti. So that part is kind of cool. But the 4070 is leaps faster. The 4070ti is basically twice as fast. The 4080 is more than twice as fast. And the 4090 is almost FOUR TIMES AS FAST as the 4060ti.

    Just looking back at Ampere, the 3090 is only 2.5 times faster than the 3060, and I am not even talking about the 3060ti. The 4060ti should be a tier above, yet it is only a quarter of the 4090? For that matter, the 4070 is not even half as fast as the 4090! The 4070ti is almost half as fast. So the 4070ti has half the VRAM and half the speed of a 4090. I recall the 1070 being half of what the 1080ti was, but now it is the ti model that holds that spot.
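
    To make those ratios concrete, here is a quick sketch that normalizes the iterations/sec figures above against the 4060ti; the values are just the ballpark benchmark numbers from the list (low end used for the 4080), nothing more.

    # Ballpark Iray iterations/sec from the benchmark thread.
    bench = {
        "3060": 6.6, "4060ti": 7.23, "4070": 12.71, "4070ti": 14.175,
        "4080": 16.5, "3090": 16.7, "4090": 28.5,
        "A5000": 14.36, "Titan RTX": 8.0,
    }

    baseline = bench["4060ti"]
    for card, rate in sorted(bench.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{card:>9}: {rate:6.2f} it/s = {rate / baseline:.2f}x a 4060ti")
    # The 4090 works out to about 3.9x the 4060ti -- "almost four times as fast".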

    It is quite ridiculous that the top card can be so much faster than a $500 card. In terms of bang for buck, the 4090 is actually at the top, which is simply unheard of. The gap between the 4080 and 4090 is one of the largest we have ever seen between the top two cards. That is why I am not so enthusiastic about the 4080. It has some performance, but the price is not right.

    So what the data really shows is...try to buy a 4090 if possible, LOL. You get the 24gb VRAM and the fastest GPU on the planet by a country mile. If the 4090 is not an option, the choices get harder and you have some tougher decisions to make.

  • marble Posts: 7,500

    As always, some good information, thanks. 

    Firstly I should point out, as I seem to need to do in every post these days, that I am not American. I do not live in the USA. I do not have access to the pricing that US consumers get. I pay in NZ dollars, and a 4090 is not selling below MSRP here. I had a quick look at the price of a 4090 at Best Buy (USA) and it is selling for $1600 USD. At today's exchange rate that is $2600 NZD. The best price I can see for a 4090 at the store where I have the store credit is $3300 NZD. I paid around $2600 NZD for the 3090, plus I have now paid out for the 4070. I don't have all that extra cash lying around to buy a 4090, no matter how desirable. Also, the specs for the 4090 insist that I need a 1000 W PSU and mine is 880W.

    So my thinking at the moment is to use the store credit to buy a second 4070. When I watch GPU-Z as I have a render going, the 4070 does not seem to draw much power - it peaks around 140W. So I am assuming that my 880W PSU will cope with two 4070s. If I am wrong, please comment.
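
    As a sanity check on that assumption, here is a hypothetical back-of-envelope sketch; every wattage in it is an illustrative estimate (the per-GPU figure is padded well above the ~140 W GPU-Z reading to allow for transient spikes), so substitute your own components' numbers.

    PSU_WATTS = 880  # the power supply in question

    # Illustrative worst-case draws, not measurements.
    draws = {
        "RTX 4070 #1": 200,  # ~140 W observed in Iray, padded for spikes
        "RTX 4070 #2": 200,
        "CPU under load": 150,
        "motherboard, RAM, drives, fans": 80,
    }

    total = sum(draws.values())
    print(f"estimated load {total} W of {PSU_WATTS} W "
          f"-> {PSU_WATTS - total} W headroom ({total / PSU_WATTS:.0%} used)")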

  • IceCrMn Posts: 2,129

    "3060  6.6

    4060ti  7.23"

     

    Well, that answers my question.

    The 4060ti is NOT worth the $500 USD asking price if one currently owns the 12GB 3060. The 4GB extra isn't worth it. It's more or less the same card performance-wise for Studio.

    I'll keep using my 3060 until the prices come down on a real upgrade.

  • Richard Haseltine Posts: 100,948

    marble said:

    As always, some good information, thanks. 

    Firstly I should point out, as I seem to need to do in every post these days, that I am not American. I do not live in the USA. I do not have access to the pricing that US consumers get. I pay in NZ dollars, and a 4090 is not selling below MSRP here. I had a quick look at the price of a 4090 at Best Buy (USA) and it is selling for $1600 USD. At today's exchange rate that is $2600 NZD. The best price I can see for a 4090 at the store where I have the store credit is $3300 NZD. I paid around $2600 NZD for the 3090, plus I have now paid out for the 4070. I don't have all that extra cash lying around to buy a 4090, no matter how desirable. Also, the specs for the 4090 insist that I need a 1000 W PSU and mine is 880W.

    Don't forget that US prices don't include sales taxes, which vary by state. I think you probably have a sales tax similar to VAT applied and shown in the store prices, though I don't know how much of the apparent difference that accounts for. I see quite a few 4090s for around £1,700, which isn't too bad, allowing for exchange rate and VAT.

    So my thinking at the moment is to use the store credit to buy a second 4070. When I watch GPU-Z as I have a render going, the 4070 does not seem to draw much power - it peaks around 140W. So I am assuming that my 880W PSU will cope with two 4070s. If I am wrong, please comment.

  • marble Posts: 7,500

    Richard Haseltine said:

    marble said:

    As always, some good information, thanks. 

    Firstly I should point out, as I seem to need to do in every post these days, that I am not American. I do not live in the USA. I do not have access to the pricing that US consumers get. I pay in NZ dollars, and a 4090 is not selling below MSRP here. I had a quick look at the price of a 4090 at Best Buy (USA) and it is selling for $1600 USD. At today's exchange rate that is $2600 NZD. The best price I can see for a 4090 at the store where I have the store credit is $3300 NZD. I paid around $2600 NZD for the 3090, plus I have now paid out for the 4070. I don't have all that extra cash lying around to buy a 4090, no matter how desirable. Also, the specs for the 4090 insist that I need a 1000 W PSU and mine is 880W.

    Don't forget that US prices don't include sales taxes, which vary by state. I think you probably have a sales tax similar to VAT applied and shown in the store prices, though I don't know how much of the apparent difference that accounts for. I see quite a few 4090s for around £1,700, which isn't too bad, allowing for exchange rate and VAT.

    So my thinking at the moment is to use the store credit to buy a second 4070. When I watch GPU-Z as I have a render going, the 4070 does not seem to draw much power - it peaks around 140W. So I am assuming that my 880W PSU will cope with two 4070s. If I am wrong, please comment.

    Indeed, each time I've visited the US I have been caught out by expecting to pay the "sticker price" only to be reminded that sales tax will be added. One of those peculiarities about American life (I gather they still use cheques - sorry checks - to pay for things). GBP 1,700 is roughly the price I would pay here in NZ allowing for the current conversion rate.

  • Petercat Posts: 2,321

    Another thing to consider is how fast your render speed needs to be. For example, if your workflow involves multiple activities, such as writing the storyline while the image is rendering, then you might not need the fastest rendering.

    My twin 3060s are sufficient to my needs, for example, so no need to spend more. Slow loading times in Studio are more of a time waster, so my money will be better spent on some fast 4TB NVME drives.

  • outrider42 Posts: 3,679

    IceCrMn said:

    "3060  6.6

    4060ti  7.23"

     

    Well, that answers my question.

    The 4060ti is NOT worth the $500 USD asking price if one currently owns the 12GB 3060. The 4GB extra isn't worth it. It's more or less the same card performance-wise for Studio.

    I'll keep using my 3060 until the prices come down on a real upgrade.

    It is a superior GPU to the 3060 in every possible way; it just may not be worth upgrading from a 3060. It uses less energy to do the same work, has 4gb more VRAM, and is indeed faster. One iteration may not seem like a lot, but this is a very basic benchmark. In previous generations one iteration separated entire tiers from each other. That can translate to some much larger gaps in complex renders. The extra 4gb is very much worth it to some people; otherwise we would never have discussed this card to begin with. The Lovelace line blows things up with far more spacing between cards, with one exception.

    Take the 4070 and 4070ti: nobody would say they are the same, but they are not even 2 iterations apart (1.465 to be exact). That is one tier above the other, at the lower range. Also, I listed my 3060, which for some reason does slightly better than most.

    The 16gb is the start and end point of all interest in this GPU. And again, it beats a 2080ti. Anybody who has stuff older than Ampere is looking at an upgrade in every way.

    The problem is the price, like a lot of the Lovelace lineup. If this card was cheaper, it would have a completely different reception.

    I cannot know pricing all over the world. I assume pricing to be generally worse across the whole line, which balances things out some. Still, the 4090 is such a step up that a 4060ti+4070 is still not close to it. What makes it worse is that the faster 4070 is VRAM limited, so if a scene is over 12gb it falls to the 4060ti alone, which renders at one quarter the speed of a 4090 and is also slower than your original 3090. To me that is a tough pill to swallow, because you had a 3090. I wouldn't want the possibility of rendering slower than the card I used to have. It is a real bummer your 3090 died; that was the sweet spot. The 4080 appears to be faster than a 3090 now (it wasn't at launch; I may need to retest my 3090 to verify). But I don't know what your options are down there.
