OT: Nvidia 3000ti series cards not so great?

Ghosty12 Posts: 2,065
edited July 2021 in The Commons

So I was just watching the video below, and what I saw was interesting and had me thinking.. The premise of the video was a discussion of the Nvidia 3060 12GB, 3070ti, 3080ti, 3090, A6000 and so on.. My take from the video is to avoid the 3080ti and look at any of the other models, including the A6000 or other variants.

Update: After reading the posts here, I thought for a bit and realized that there are probably some extenuating circumstances to what was going on in the video, in regard to the talk of power and heat of the video cards.. And I made a small change to my original post, since I probably jumped on the bandwagon a bit too quickly.. lol


Comments

  • Drip Posts: 1,206

    I simply refer to the iray benchmarking thread we have running here, which has been pretty reliable so far, and find something there that matches my budget.

    Although I do game a lot, I'm not too bothered by games graphics-wise. I grew up gaming on a Commodore 64, and learned to judge games by their mechanics more than their appearance. So, in case performance drops, I have no trouble lowering the graphics quality a few notches. There are plenty of mediocre-looking games that I love and still regularly play, and there are also plenty of awesome-looking games which I don't play simply because their gameplay isn't much to write home about.
    For iray rendering, on the other hand, output quality and rendering speed are all that matter for my choice of graphics card, meaning that, if budget permits, I'd pick a 3080Ti over a 3060 12GB anytime.

  • Ghosty12 Posts: 2,065

    The problem here is not so much that the 3080ti is a bad card per se, it is just that according to that video it is power hungry even at idle and supposedly gets rather warm due to the use of GDDR6X.. For me, if I could, I would go for an A4000 or A5000 on the expensive end and a 3060 12GB on the cheap end, considering I am still rocking an awesome 1070ti..

  • GoOutlier Posts: 13

    Can't say I agree with his points. I'm sitting here with a 3080ti running 3 monitors, browser windows open, a Twitch stream running, and it's averaging <40 watts on the GPU since I loaded HWiNFO to monitor it, GPU fans usually completely off, depending on what I have on screen. I'm questioning how he did his overclock to get those power issues, and his dismissal of underclocking or (especially) undervolting these cards. But yes, when it comes to chip temps, the FE 3080ti specifically will run hotter because it's basically a 3090 core in the 3080 FE's cooling solution.
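    If anyone wants to sanity-check idle draw without installing HWiNFO, nvidia-smi (which ships with the Nvidia driver) can report it too. A minimal sketch in Python, assuming the driver tools are on the PATH; the query fields shown are just the common ones:

        import subprocess

        # Ask nvidia-smi for the current board power draw and memory use of each GPU.
        fields = "name,power.draw,memory.used,memory.total"
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        for line in result.stdout.strip().splitlines():
            print(line)  # e.g. "NVIDIA GeForce RTX 3080 Ti, 27.41 W, 612 MiB, 12288 MiB"

    At desktop idle these cards typically report only a few tens of watts, which is an easy way to check claims like the ones in the video.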

  • kyoto kid Posts: 41,244
    edited July 2021

    ...yeah, the A5000 tops out at 230W (20W less than my old Titan X), while the 3080 Ti is rated at 350W. As I don't bother with games, if I had the funds and Daz supported Linux, I'd get one of those.

    It is also a dual-slot rather than a triple-slot card like the 3090, and has the same dimensions as my Titan X.  The only rub is that they apparently already dropped W7 driver support on the Ampere pro-grade RTX cards, and W10 will never touch my system.

  • outrider42 Posts: 3,679
    The thing to keep in mind is that Tom is not a hardware reviewer, and he even points that out in the video. Other reviews were able to demonstrate a more visible gap between the 3080ti and the 6800XT, especially beyond gaming. Indeed, in many applications the 3080ti will outright stomp the 6800XT. His channel is heavily gaming focused, which just doesn't apply to us. The idle power issue has to be a fluke, because you can find reviews that directly contradict it. There has to be something going on with his setup causing abnormal idling. That doesn't discount his entire review, but it does raise some questions.

    That said the 3080ti really is a poor value, and pretty much every reviewer will agree with that. The biggest problem is pricing. Assuming MSRP (a tough assumption to make, I know), the 3080ti asks $1200. This is so much more than a 3080 for a very small performance gain, and for only $300 more you can get a 3090 with double the VRAM. You could even start looking at Quadro class cards at this price point which offer more VRAM though not as much performance.

    If the 3080ti had been listed at $1000, I don't think the reviews would be quite so harsh. But it is what it is: at $1200 the 3080ti is a pretty bad value.

    As stated earlier, we have our own benchmark thread which tests Daz Iray. It is likely the only place on all of the internet that contains so much data specific to Iray, and specific to Daz's implementation of Iray. Because as 4.14 proved, Daz can impact Iray performance itself by changing how it handles certain things. In this case, the way normal maps were handled was changed in 4.14, so even though 4.14 has the same version of Iray as 4.12 did, there is noticeable performance improvement when you have models using normal maps in your scene. You will not find this kind of information anywhere else but here.

    So with our benchmark thread in mind, you will find a lot of answers about performance in Iray. The only other question is going to concern VRAM. When it comes to content creation, how much VRAM you need is going to vary wildly depending on what you do. So no benchmark thread can really help you in that regard. You'll have to decide what amount is best for you. It is a real shame that Nvidia chose to be skimpy on capacity compared to AMD here. The only Ampere with 16GB of VRAM is a Quadro that is roughly similar to the 3070 in performance. At 12GB, we only have the 3060 and 3080ti. Then the 3090 sits at 24GB. And finally the A6000 packs 48GB.

    Keep in mind that whatever VRAM capacity you go with, you will need enough RAM to actually get the most out of it. It is totally possible to run out of RAM before running out of VRAM if you do not spec correctly. I can use over 45GB of RAM in a scene while using only 10GB of VRAM on my 1080ti. This is a fairly extreme case, but it demonstrates just how lopsided the ratio can be. If my PC had 32GB of RAM, then in this scene Daz would simply crash when attempting to render. Fortunately, I have 64GB. If I bought a 3090 tomorrow, I could potentially run out of my 64GB of RAM before tapping all of the 3090's 24GB. And this has actually happened; a user posted a video where he did exactly that.
  • Ghosty12 Posts: 2,065
    edited July 2021

    outrider42 said:

    The thing to keep in mind is that Tom is not a hardware reviewer, and he even points that out in the video. Other reviews were able to demonstrate a more visible gap between the 3080ti and the 6800XT, especially beyond gaming. Indeed, in many applications the 3080ti will outright stomp the 6800XT. His channel is heavily gaming focused, which just doesn't apply to us. The idle power issue has to be a fluke, because you can find reviews that directly contradict it. There has to be something going on with his setup causing abnormal idling. That doesn't discount his entire review, but it does raise some questions.

     

    That said the 3080ti really is a poor value, and pretty much every reviewer will agree with that. The biggest problem is pricing. Assuming MSRP (a tough assumption to make, I know), the 3080ti asks $1200. This is so much more than a 3080 for a very small performance gain, and for only $300 more you can get a 3090 with double the VRAM. You could even start looking at Quadro class cards at this price point which offer more VRAM though not as much performance.

     

    If the 3080ti had been listed at $1000, I don't think the reviews would be quite so harsh. But it is what it is: at $1200 the 3080ti is a pretty bad value.

     

    As stated earlier, we have our own benchmark thread which tests Daz Iray. It is likely the only place on all of the internet that contains so much data specific to Iray, and specific to Daz's implementation of Iray. Because as 4.14 proved, Daz can impact Iray performance itself by changing how it handles certain things. In this case, the way normal maps were handled was changed in 4.14, so even though 4.14 has the same version of Iray as 4.12 did, there is noticeable performance improvement when you have models using normal maps in your scene. You will not find this kind of information anywhere else but here.

     

    So with our benchmark thread in mind, you will find a lot of answers about performance in Iray. The only other question is going to concern VRAM. When it comes to content creation, how much VRAM you need is going to vary wildly depending on what you do. So no benchmark thread can really help you in that regard. You'll have to decide what amount is best for you. It is a real shame that Nvidia chose to be skimpy on capacity compared to AMD here. The only Ampere with 16GB of VRAM is a Quadro that is roughly similar to the 3070 in performance. At 12GB, we only have the 3060 and 3080ti. Then the 3090 sits at 24GB. And finally the A6000 packs 48GB.

     

    Keep in mind that whatever VRAM capacity you go with, you will need enough RAM to actually get the most out of it. It is totally possible to run out of RAM before running out of VRAM if you do not spec correctly. I can use over 45GB of RAM in a scene while using only 10GB of VRAM on my 1080ti. This is a fairly extreme case, but it demonstrates just how lopsided the ratio can be. If my PC had 32GB of RAM, then in this scene Daz would simply crash when attempting to render. Fortunately, I have 64GB. If I bought a 3090 tomorrow, I could potentially run out of my 64GB of RAM before tapping all of the 3090's 24GB. And this has actually happened; a user posted a video where he did exactly that.

    This is one of the hard things with these types of videos, and yes, it could be an unfortunate fluke causing the problems he was having.. With techtubers I more often than not listen to the likes of Gamers Nexus, Hardware Unboxed and a couple of others.. And true, pricing does play a big part, but charging so much for a 2GB VRAM increase still has me scratching my head..

    I have seen that thread; it has some interesting information, and making sense of it all is the part I have to do now.. lol  Reading some of the results, the 3060 seems like a reasonable card to have, although actually getting one would be the trick..

    And I always forget that part about system RAM; here I was thinking that 32GB is enough.. lol  For most things it is, but at the moment pretty much all of this is moot to me. I would love to get another card, and I know which one I would like, but trying to get one is still near impossible, and even if I could, I would probably have to sell a kidney and a liver right now to get one..

    Still hanging out and waiting for GPU pricing to come back to earth..

  • kyoto kid Posts: 41,244
    edited July 2021

    ...yeah, I've gone up to 18 GB of system memory while rendering even though in VRAM a scene may be 5 - 6 GB.  I'd feel more comfortable if I could upgrade to 48 GB, but those old workstation/server boards are hard to find in single-socket versions. There are some who have successfully coaxed an X58 MB into supporting 48 GB, but it seems hit and miss as it also involves BIOS tweaks.

    A full/new system rebuild is out of the question on my budget.

  • Ghosty12 Posts: 2,065
    edited July 2021

    kyoto kid said:

    ...yeah, I've gone up to 18 GB of system memory while rendering even though in VRAM a scene may be 5 - 6 GB.  I'd feel more comfortable if I could upgrade to 48 GB, but those old workstation/server boards are hard to find in single-socket versions. There are some who have successfully coaxed an X58 MB into supporting 48 GB, but it seems hit and miss as it also involves BIOS tweaks.

    A full/new system rebuild is out of the question on my budget.

    It is amazing sometimes how much system RAM is being used, and here I am going nuts shrinking textures and so on to fit scenes on my 1070ti.. Yup, it can be a right pita to get a decent system these days at a price that won't leave one homeless..

  • Krzysztofa Posts: 226

    TIL the A6000 exists and has 48GB of VRAM, holy cow. I could probably render Kiko with all her fur with that GPU.

  • kyoto kid Posts: 41,244
    edited July 2021

    ...yeah, I've concocted systems using slightly older server hardware with 128 GB of memory and older Xeon CPUs to support my Titan X.  The best I can find is a single-socket Supermicro board that can support up to 512 GB of ECC memory (8 DIMMs and 2 PCIe 3.0 x16 expansion slots), a Haswell 12-core 2.6 GHz Xeon, and 128 GB of DDR4 2133 memory (8 x 16 GB Hynix ECC) for around $1,100, a couple hundred less than it cost me to build my original 12 GB system 9 years ago.

  • takezo_3001 Posts: 1,997

    The biggest piece of crap card is the 3070 Ti, as it is not only a gimped 3080 but also has a measly 8 GB of VRAM, making it useless for rendering and even for 4K gaming (hell, even 1440p with some titles, and more in the future!), which means it will become obsolete within its own generation! And for only $100 more you could get a 3080 instead, with 2 GB more VRAM to boot!

    When prices finally equalize back to MSRP levels, that is... It's like the worthless, paltry 6 GB 1600 series that too many have wasted their money on!

  • UHF Posts: 515

    Don't overthink this.

    Check out OctaneBench; it's just raw horsepower comparisons, and you'll see that the 3080 Ti is just fine.  Just look through the single-GPU comparisons.

    https://render.otoy.com/octanebench/

    In my opinion the 3090 with 24GB of VRAM is the better option for rendering Iray, in that you won't have to spend time optimizing scenes for rendering.  A 3080 Ti with 12GB will work for simple scenes with zero compression.  More characters may necessitate a bit of effort to optimize the scene.

    You should also be aware that you'll probably not be able to use the 48GB of VRAM in the A6000 video card.  The only thing harder on memory than Iray is Daz Studio.  Rauko did a video where he tried to load as many characters as he could into his 24GB 3090.  His 64GB PC ran out of RAM before the video card did, at 14 characters (17GB of memory used in the 3090).

  • outrider42 Posts: 3,679

    UHF said:

    Don't overthink this.

    Check out OctaneBench; it's just raw horsepower comparisons, and you'll see that the 3080 Ti is just fine.  Just look through the single-GPU comparisons.

    https://render.otoy.com/octanebench/

    In my opinion the 3090 with 24GB of VRAM is the better option for rendering Iray, in that you won't have to spend time optimizing scenes for rendering.  A 3080 Ti with 12GB will work for simple scenes with zero compression.  More characters may necessitate a bit of effort to optimize the scene.

    You should also be aware that you'll probably not be able to use the 48GB of VRAM in the A6000 video card.  The only thing harder on memory than Iray is Daz Studio.  Rauko did a video where he tried to load as many characters as he could into his 24GB 3090.  His 64GB PC ran out of RAM before the video card did, at 14 characters (17GB of memory used in the 3090).

    That was the video I was talking about.

    Everything is variable though. I can max out my 1080ti's 11GB of VRAM while using 28GB of system RAM. But another scene maxes the VRAM and uses 45 or even 50GB of RAM. So it is absolutely possible to use the 12GB 3080ti with 32GB of RAM, but it is indeed possible to exceed 32GB. It just depends on what kinds of things you do. That is a very wide difference between those scenes. Iray compresses texture data into its own special format that it sends to VRAM. The settings for compression are in the advanced settings for Iray, and they determine what size textures get compressed. But the textures remain uncompressed in RAM, so this is one factor in why some scenes can use drastically more RAM than others. However, I don't believe Iray compresses geometry in any significant way.
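    Just to illustrate why the RAM side balloons, here is a rough back-of-the-envelope sketch of uncompressed texture sizes. All of the map counts and scene sizes below are made-up ballpark assumptions for illustration, not anything measured from Iray:

        # An uncompressed 8-bit RGBA texture costs width * height * 4 bytes.
        def texture_mib(width, height, channels=4, bytes_per_channel=1):
            return width * height * channels * bytes_per_channel / 2**20

        single_4k = texture_mib(4096, 4096)   # one 4K map: ~64 MiB uncompressed
        maps_per_character = 20               # assumed mix of diffuse/normal/roughness maps
        characters = 4                        # assumed scene
        total_gib = single_4k * maps_per_character * characters / 1024
        print(f"One 4K RGBA map: {single_4k:.0f} MiB uncompressed")
        print(f"{characters} characters x {maps_per_character} 4K maps: ~{total_gib:.1f} GiB before compression")

    Even that modest assumed scene lands around 5 GiB of raw texture data sitting in system RAM, before Daz Studio itself, the geometry, and everything else is counted.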

    The important thing here is that people just need to be aware that they can run out of RAM, and that VRAM is not always the answer. If you own a PC that only has 32GB of RAM, and it is not capable of adding more, then a 3090 will do you no good for now. In this specific situation the 3080ti would actually make more sense, as it offers nearly all the performance of a 3090 but costs less. The VRAM and RAM would be more balanced, so to speak. You may run into situations where a scene ends up using more RAM than you have, but I think an 8GB card would just not be enough.

    To make use of an A6000 with 48GB, you would certainly need at least 128GB of RAM, probably more. However, you would still be able to clear a lot more VRAM than any other card before running out of that RAM, so that can be a consideration as well. In the Rauko video, while he did run out of RAM before VRAM, I think it is still noteworthy that he managed to get to 17GB of VRAM in the first place. He would not have been able to get that far with just a 12GB 3080ti. So in this situation, he still benefits from having the 3090 in his system. And if you have 128GB of RAM, while you might not be able to use all 48GB of an A6000, you would still be able to render larger scenes than a 3090 could.

  • GoOutlier Posts: 13

    outrider42 said:

    The thing to keep in mind is that Tom is not a hardware reviewer, and he even points that out in the video. Other reviews were able to demonstrate a more visible gap between the 3080ti and the 6800XT, especially beyond gaming. Indeed, in many applications the 3080ti will outright stomp the 6800XT. His channel is heavily gaming focused, which just doesn't apply to us. The idle power issue has to be a fluke, because you can find reviews that directly contradict it. There has to be something going on with his setup causing abnormal idling. That doesn't discount his entire review, but it does raise some questions.

    I'm not trying to sound like I'm overly defending my purchase, and I'm not that familiar with him, but it does raise a few questions. It seems like he has a questionable overclock or power issue; it's weird that he says it idles at 2.5-3x what any other reviewers get, says other people report the same big idle wattage (really?), and doesn't investigate whether something may be wrong (2 minutes of googling to check against his results) before publishing. He also complains about power usage and heat when sitting in a game menu that is likely unoptimized and/or running uncapped FPS. He says he's running "at 400 watts" on an FE card in the gameplay video clips, when I don't remember 3090 FEs pulling that much full board power at sustained load in other reviews. He complains that it "only has the VRAM of a 3060" but doesn't mention how much faster the 3080ti is, or point out that the 3060 is the outlier in the 3000 series when it comes to VRAM, not the 3080ti. He shows some charts, but on the other hand says things like "[other card] could be as fast as..." or "feels like it's using double the power of [other card]..." that either contradict other reviewers or come across as subjectivity that makes me question the rest of the content.

    Like you go on to say, there are a lot of things to balance when it comes to value and individual needs for speed, VRAM, and cost. For me the 3080ti was more of a purchase of opportunity; I had the chance to get one, so I pulled the trigger. I did want that higher tier of performance, 3080 or above, but coming from an 8GB card I think I needed more than 10, and didn't really see myself needing 24. If I could have easily gotten a 3090 at MSRP I might have, but on the other hand I could use the actual price difference to get RTX in my second machine, or put it towards the budget for a 12th Gen or AM5 rig, etc.

    And yes, the forum Iray benchmark thread is very helpful, and I'll be posting my initial 3080ti results there along with a last-minute bench of a 1070 on the latest Iray and driver prior to the swap. I know you're active there and always encouraging the search for more information. Big thanks to you, RayDAnt and the other contributors for that thread.

     

  • Ghosty12 Posts: 2,065

    outrider42 said:

    UHF said:

    Don't overthink this.

    Check out OctaneBench; it's just raw horsepower comparisons, and you'll see that the 3080 Ti is just fine.  Just look through the single-GPU comparisons.

    https://render.otoy.com/octanebench/

    In my opinion the 3090 with 24GB of VRAM is the better option for rendering Iray, in that you won't have to spend time optimizing scenes for rendering.  A 3080 Ti with 12GB will work for simple scenes with zero compression.  More characters may necessitate a bit of effort to optimize the scene.

    You should also be aware that you'll probably not be able to use the 48GB of VRAM in the A6000 video card.  The only thing harder on memory than Iray is Daz Studio.  Rauko did a video where he tried to load as many characters as he could into his 24GB 3090.  His 64GB PC ran out of RAM before the video card did, at 14 characters (17GB of memory used in the 3090).

    That was the video I was talking about.

    Everything is variable though. I can max out my 1080ti's 11GB of VRAM while using 28GB of system RAM. But another scene maxes the VRAM and uses 45 or even 50GB of RAM. So it is absolutely possible to use the 12GB 3080ti with 32GB of RAM, but it is indeed possible to exceed 32GB. It just depends on what kinds of things you do. That is a very wide difference between those scenes. Iray compresses texture data into its own special format that it sends to VRAM. The settings for compression are in the advanced settings for Iray, and they determine what size textures get compressed. But the textures remain uncompressed in RAM, so this is one factor in why some scenes can use drastically more RAM than others. However, I don't believe Iray compresses geometry in any significant way.

    The important thing here is that people just need to be aware that they can run out of RAM, and that VRAM is not always the answer. If you own a PC that only has 32GB of RAM, and it is not capable of adding more, then a 3090 will do you no good for now. In this specific situation the 3080ti would actually make more sense, as it offers nearly all the performance of a 3090 but costs less. The VRAM and RAM would be more balanced, so to speak. You may run into situations where a scene ends up using more RAM than you have, but I think an 8GB card would just not be enough.

    To make use of an A6000 with 48GB, you would certainly need at least 128GB of RAM, probably more. However, you would still be able to clear a lot more VRAM than any other card before running out of that RAM, so that can be a consideration as well. In the Rauko video, while he did run out of RAM before VRAM, I think it is still noteworthy that he managed to get to 17GB of VRAM in the first place. He would not have been able to get that far with just a 12GB 3080ti. So in this situation, he still benefits from having the 3090 in his system. And if you have 128GB of RAM, while you might not be able to use all 48GB of an A6000, you would still be able to render larger scenes than a 3090 could.

    I have always wondered about that with regard to geometry.. While getting textures under control is a given, in some cases having too much geometry in a scene can be a problem as well, which is why we now have more people using instancing whenever they can.. It seems that shrinking geometry down cannot be done as easily as it can with textures, for now anyway..

    As for me, when I can I will probably go for a 3060 12GB, or maybe a 3080; I will wait and see what happens, I suppose, as they are still way too expensive.. lol

  • kyoto kid Posts: 41,244

    ...what would help immensely is being able to close the scene and the Daz programme (which takes a major cut out of system memory) after submitting a scene to the render engine.  Reality/Lux did this.  Needing to keep the scene open in Daz is part of the system memory hog.

    Iray needs true batch/background rendering, not just through Iray Server, which costs extra.

  • outrider42 Posts: 3,679

    Geometry is hard to mask in Iray. If you use Genesis at base resolution, it will be immediately obvious in almost any render unless the figures are far away from the camera or the render is very low resolution. However, texture compression can work pretty well if done right, and may not be noticeable until very close to the camera. Though as I type this, I often find myself using AI upscalers to upscale many environmental textures on various props and things for Daz. I see many floor textures that just do not hold up. However, this is more because of the way the textures were originally made.

    There is another thing, too: if your cards can NVLink (the 3090 is the only 3000 series card that can), you can pool VRAM in Iray. However, it cannot share all the data. The cards will pool texture data, but NOT geometry data. So you will be able to use a lot more memory, but only to a certain degree. I just find this interesting as we are discussing geometry at the moment. But this is another lesser known option for Iray users. Turing has a lot more NVLink-capable cards; the 2070 Super, 2080, 2080 Super, 2080ti, and Titan RTX can all NVLink with a second card to pool their memory this way. The second card must be the same model, so that the NVLink connector will fit. IMO Nvidia may have thought that NVLink was too good and moved to restrict it with Ampere, since the 3090 is the only one that can use it. Of course Quadro class cards can still use it, at least the higher-range ones. The low end Quadros do not have NVLink, which I personally feel is pretty shocking.
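    As a rough way to think about what pooling buys you: texture data can be split across the pair, while the non-shared geometry has to fit on each card. A minimal sketch of that ceiling, with the geometry sizes below being pure assumptions for illustration:

        # Illustrative NVLink pooling estimate: textures split across both cards,
        # geometry duplicated on each card because it is not shared.
        def max_scene_gb(per_card_vram_gb, geometry_gb, cards=2):
            texture_budget = cards * (per_card_vram_gb - geometry_gb)
            return geometry_gb + texture_budget

        print(max_scene_gb(24, geometry_gb=4))   # two 3090s, assumed 4 GB geometry -> ~44 GB scene
        print(max_scene_gb(24, geometry_gb=10))  # heavier assumed geometry -> ~38 GB scene

    The heavier the geometry, the less the second card's VRAM actually adds, which is why the lack of geometry sharing matters.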

    Back to Moore's Law: he kind of has a tendency to favor AMD in his videos. He does it somewhat subtly most of the time, just in how he phrases things and in his tone, but sometimes he is not subtle at all, like in this video. I don't think he even realizes how much he does it, and this is also likely a product of how his viewership reacts. A lot of people really do not like Nvidia right now, at least a lot of people who make comments and videos. If he were nicer to Nvidia he might actually lose viewers. And I can get that; it is literally hip to bash Nvidia. Nvidia has not been very consumer friendly, to put it mildly, in recent years. But that is not an excuse for people reviewing tech to lean one way or another in their reviews. Tom has at times spoken negatively about AMD, but he generally softens any such blow by mentioning Nvidia doing something worse in the very next sentence, so that the negative AMD comment is glanced over.

    He is still a good leaker; that is his main calling card. He has built up a lot of contacts in the industry who help keep him informed of things that might be coming. A lot of his leaks are pretty accurate, and when they are off, it is often because the company decided to go in a different direction, not that the leak was necessarily wrong. But he is clearly not a reviewer, and at least he admits it, as he suggests going to Hardware Unboxed and GamersNexus for better reviews. Plus most of his content is very gaming focused; though he does occasionally have guests on to discuss servers and the broader tech industry, he generally looks at things from a gaming perspective. His interview with Daniel Nenni was really great, offering a lot of insight into the semiconductor industry and clearing up a lot of misconceptions. If you are interested in the topic of semiconductors, his podcast with Nenni is highly recommended. He even invited Nenni back for a follow-up interview. These were about 2 hours each, BTW, but even though they were long I listened to every minute of them both.

    But a reviewer he is not, and it showed in this 3080ti review.

  • outrider42 said:

    Geometry is hard to mask in Iray. If you use Genesis at base resolution, it will be immediately obvious in almost any render unless the figures are far away from the camera or the render is very low resolution. However, texture compression can work pretty well if done right, and may not be noticeable until very close to the camera. Though as I type this, I often find myself using AI upscalers to upscale many environmental textures on various props and things for Daz. I see many floor textures that just do not hold up. However, this is more because of the way the textures were originally made.

    There is another thing, too: if your cards can NVLink (the 3090 is the only 3000 series card that can), you can pool VRAM in Iray. However, it cannot share all the data. The cards will pool texture data, but NOT geometry data. So you will be able to use a lot more memory, but only to a certain degree. I just find this interesting as we are discussing geometry at the moment. But this is another lesser known option for Iray users. Turing has a lot more NVLink-capable cards; the 2070 Super, 2080, 2080 Super, 2080ti, and Titan RTX can all NVLink with a second card to pool their memory this way. The second card must be the same model, so that the NVLink connector will fit. IMO Nvidia may have thought that NVLink was too good and moved to restrict it with Ampere, since the 3090 is the only one that can use it. Of course Quadro class cards can still use it, at least the higher-range ones. The low end Quadros do not have NVLink, which I personally feel is pretty shocking.

    Back to Moore's Law: he kind of has a tendency to favor AMD in his videos. He does it somewhat subtly most of the time, just in how he phrases things and in his tone, but sometimes he is not subtle at all, like in this video. I don't think he even realizes how much he does it, and this is also likely a product of how his viewership reacts. A lot of people really do not like Nvidia right now, at least a lot of people who make comments and videos. If he were nicer to Nvidia he might actually lose viewers. And I can get that; it is literally hip to bash Nvidia. Nvidia has not been very consumer friendly, to put it mildly, in recent years. But that is not an excuse for people reviewing tech to lean one way or another in their reviews. Tom has at times spoken negatively about AMD, but he generally softens any such blow by mentioning Nvidia doing something worse in the very next sentence, so that the negative AMD comment is glanced over.

    He is still a good leaker; that is his main calling card. He has built up a lot of contacts in the industry who help keep him informed of things that might be coming. A lot of his leaks are pretty accurate, and when they are off, it is often because the company decided to go in a different direction, not that the leak was necessarily wrong. But he is clearly not a reviewer, and at least he admits it, as he suggests going to Hardware Unboxed and GamersNexus for better reviews. Plus most of his content is very gaming focused; though he does occasionally have guests on to discuss servers and the broader tech industry, he generally looks at things from a gaming perspective. His interview with Daniel Nenni was really great, offering a lot of insight into the semiconductor industry and clearing up a lot of misconceptions. If you are interested in the topic of semiconductors, his podcast with Nenni is highly recommended. He even invited Nenni back for a follow-up interview. These were about 2 hours each, BTW, but even though they were long I listened to every minute of them both.

    But a reviewer he is not, and it showed in this 3080ti review.

    I'm not sure about the previous versions of Daz, but when I asked one of Daz's staff members about NVLink, I was specifically told that NVLink does not currently work with Iray. When they told me this, I just assumed they meant it didn't work, period. Well, if it can't do it with the geometry, it's kind of a waste.

  • PerttiA Posts: 10,024

    magog_a4eb71ab said:

    I'm not sure about the previous versions of Daz, but when I asked one of Daz's staff members about NVLink, I was specifically told that NVLink does not currently work with Iray. When they told me this, I just assumed they meant it didn't work, period. Well, if it can't do it with the geometry, it's kind of a waste.

    Normally geometry is not a problem; textures eat 10+ times more VRAM unless you are going crazy with SubD.
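    Rough numbers to show why, with the vertex count, bytes per vertex, and map counts below all being ballpark assumptions rather than exact Iray internals:

        # Geometry grows roughly 4x per SubD level, but still tends to stay small
        # next to a full set of uncompressed 4K textures.
        BASE_VERTS = 16_000     # assumed base-resolution figure
        BYTES_PER_VERT = 48     # assumed: position + normal + UVs + misc attributes

        for subd in range(5):
            verts = BASE_VERTS * 4 ** subd
            geo_mib = verts * BYTES_PER_VERT / 2**20
            print(f"SubD {subd}: ~{verts:>10,} verts, about {geo_mib:8.1f} MiB of geometry")

        # Compare with textures: twenty 4K RGBA maps are roughly 20 * 64 MiB = 1.25 GiB uncompressed.

    Up to SubD 3 or so the geometry stays in the tens of megabytes, which is why textures dominate unless the subdivision really is cranked.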

  • outrider42 Posts: 3,679

    NVLink support did not come right away. It took quite a while, so depending on when you asked, the answer may have been different. I was pretty sure we had a forum user or two test and verify that it worked, but I cannot find the thread. The test was simple: build a scene that was not possible on a single GPU's VRAM and see if it renders with NVLink.

    Various GPU monitoring tools do not correctly report NVLink-pooled GPUs, which can make this confusing, because it is hard to observe exactly how much memory is in use with NVLink.

    It is a bummer that it cannot share geometry, but like PerttiA said, texture data is usually the biggest VRAM hog. But this can vary a bit if you are using the highest subdivision available on models.

  • kyoto kid Posts: 41,244

    ..so what version of Daz was this implemented in? 

  • PerttiA Posts: 10,024

    kyoto kid said:

    ..so what version of Daz was this implemented in? 

    It was there in summer 2020 at least, as it was one of the things I bought the 2070 Super for, thinking that I could get another one later when the Amperes were released... We all know what happened then, and they even removed the 2070 Super listings from the stores some 6 months ago... 

  • kyoto kid Posts: 41,244

    ..would that be 4.12?

  • Richard Haseltine Posts: 102,718
    edited July 2021

    http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log_4_12_1_118 looks like 4.12.1.109 was the first Public Build with the feature, if I am reading correctly - or 4.12.1.76

  • kyoto kid Posts: 41,244

    ...ah, oh well, I'm still on 4.12.0.47 as it's been so rock solid, even more so than the general release.. Not going to the buggy 4.15, which also doesn't work as well with older GPUs.

  • outrider42 said:

    NVLink support did not come right away. It took quite a while, so depending on when you asked, the answer may have been different. I was pretty sure we had a forum user or two test and verify that it worked, but I cannot find the thread. The test was simple: build a scene that was not possible on a single GPU's VRAM and see if it renders with NVLink.

    Various GPU monitoring tools do not correctly report NVLink-pooled GPUs, which can make this confusing, because it is hard to observe exactly how much memory is in use with NVLink.

    It is a bummer that it cannot share geometry, but like PerttiA said, texture data is usually the biggest VRAM hog. But this can vary a bit if you are using the highest subdivision available on models.

    It was only about 2-3 weeks ago, because I was in the process of deciding whether to go with two RTX A5000s (~16,400 CUDA cores total with 48GB of VRAM using NVLink) or a single RTX A6000.

  • Gator_2236745 Posts: 1,312

    Ghosty12 said:

    The problem here is not so much that the 3080ti is a bad card per se, it is just that according to that video it is power hungry even at idle and supposedly gets rather warm due to the use of GDDR6X.. For me, if I could, I would go for an A4000 or A5000 on the expensive end and a 3060 12GB on the cheap end, considering I am still rocking an awesome 1070ti..

    Well, it looks like he's got some system issue, as my 3090 idles under 40 watts, and GoOutlier's does too.  He took his experience and one other person's comments and jumped on it a little quickly for that issue.

    It doesn't really change his value sentiment, which I agree with.

  • outrider42 Posts: 3,679

    magog_a4eb71ab said:

    outrider42 said:

    NVLink support did not come right away. It took quite a while, so depending on when you asked, the answer may have been different. I was pretty sure we had a forum user or two test and verify that it worked, but I cannot find the thread. The test was simple: build a scene that was not possible on a single GPU's VRAM and see if it renders with NVLink.

    Various GPU monitoring tools do not correctly report NVLink-pooled GPUs, which can make this confusing, because it is hard to observe exactly how much memory is in use with NVLink.

    It is a bummer that it cannot share geometry, but like PerttiA said, texture data is usually the biggest VRAM hog. But this can vary a bit if you are using the highest subdivision available on models.

    It was only about 2-3 weeks ago, because I was in the process of deciding whether to go with two RTX A5000s (~16,400 CUDA cores total with 48GB of VRAM using NVLink) or a single RTX A6000.

    Keep in mind you are going to need like 200GB of RAM to be able to use all that VRAM. I cannot speak from experience on that amount, but going by how much RAM I use, I do not think 128GB will be enough to populate all 48GB of VRAM. After all, if a user can run out of 64GB of RAM with only 17GB of VRAM in use, it stands to reason you would reach about 34GB of VRAM with 128GB of RAM. So you would fall considerably short of the full VRAM capacity.

    With NVLink, because geometry is not shared, you would not get the full 48GB between two A5000s, but if you can only use about 34GB of VRAM anyway, that would be a logical choice. Of course, if you have more than 128GB of RAM then this is less of an issue.
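    Putting that back-of-the-envelope math into numbers, extrapolating from the single 64GB-RAM/17GB-VRAM data point above, so treat it as a rough rule of thumb rather than anything exact:

        # One observed data point: ~64 GB of system RAM exhausted at ~17 GB of VRAM filled.
        RAM_PER_VRAM = 64 / 17   # roughly 3.8 GB of RAM per GB of VRAM actually used

        for ram_gb in (32, 64, 128, 256):
            usable_vram = ram_gb / RAM_PER_VRAM
            print(f"{ram_gb:>3} GB RAM -> roughly {usable_vram:3.0f} GB of VRAM before RAM runs out")

    By that ratio, 128GB of RAM tops out around the 34GB of VRAM mentioned above, and filling a 48GB A6000 would indeed take something in the 180-200GB range.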

  • Ghosty12 Posts: 2,065

    scott762_948aec318a said:

    Ghosty12 said:

    The problem here is not so much that the 3080ti is a bad card per se, it is just that according to that video it is power hungry even at idle and supposedly gets rather warm due to the use of GDDR6X.. For me, if I could, I would go for an A4000 or A5000 on the expensive end and a 3060 12GB on the cheap end, considering I am still rocking an awesome 1070ti..

    Well, it looks like he's got some system issue, as my 3090 idles under 40 watts, and GoOutlier's does too.  He took his experience and one other person's comments and jumped on it a little quickly for that issue.

    It doesn't really change his value sentiment, which I agree with.

    Interesting, so it does make me wonder what Moore was doing to the card, apart from, I think, OCing it.. And yeah, if the 3080ti had 16GB and the 3070ti had 12GB of VRAM, then the response by some asking why these cards were even made would probably not have happened..

  • magog_a4eb71ab said:

    outrider42 said:

    NVLink support did not come right away. It took quite a while, so depending on when you asked, the answer may have been different. I was pretty sure we had a forum user or two test and verify that it worked, but I cannot find the thread. The test was simple: build a scene that was not possible on a single GPU's VRAM and see if it renders with NVLink.

    Various GPU monitoring tools do not correctly report NVLink-pooled GPUs, which can make this confusing, because it is hard to observe exactly how much memory is in use with NVLink.

    It is a bummer that it cannot share geometry, but like PerttiA said, texture data is usually the biggest VRAM hog. But this can vary a bit if you are using the highest subdivision available on models.

    It was only about 2-3 weeks ago, because I was in the process of deciding whether to go with two RTX A5000s (~16,400 CUDA cores total with 48GB of VRAM using NVLink) or a single RTX A6000.

    Keep in mind you are going to need like 200GB of RAM to be able to use all that VRAM. I cannot speak from experience on that amount, but going by how much RAM I use, I do not think 128GB will be enough to populate all 48GB of VRAM. After all, if a user can run out of 64GB of RAM with only 17GB of VRAM in use, it stands to reason you would reach about 34GB of VRAM with 128GB of RAM. So you would fall considerably short of the full VRAM capacity.

    With NVLink, because geometry is not shared, you would not get the full 48GB between two A5000s, but if you can only use about 34GB of VRAM anyway, that would be a logical choice. Of course, if you have more than 128GB of RAM then this is less of an issue.

    Just big baby steps for now. I can't and won't use the full 48 GB of VRAM. If and when I do get to that point, I'll be able to either upgrade to 256 GB of RAM (most likely choice) OR have a proper render server built with something like this: https://www.asus.com/us/motherboards-components/motherboards/workstation/pro-ws-wrx80e-sage-se-wifi/ and I'll move the A6000 over to it and start filling PCIe slots as needed, since bigger scenes need more CUDA cores or render times can go up higher than an orbiting astronaut's arse. Plus a lot of stuff seems to be going the way of dForce, etc... I'm wondering if we're going to be seeing dForce skin on top of a skeletal frame soon in Daz!

    Of course, by the time that ever comes about, I'm sure there will be a better equivalent, given how fast computer tech changes. Hopefully Daz keeps up with it.
