Comments
Maybe this will help some people.
I have an AMD 3700X and 2x 2080 Ti. One of the 2080 Tis is cooled with an AiO, the other with a fan.
I have 3 SSDs, some additional case fans, and a USB sound card.
When I render, the power draw is around 600W. I measure only at the power cable to my computer, so the 600W does not include the screen or anything else that doesn't draw its power directly from the PC. There are no real peaks; it varies by around 20 to 40W.
That means that with my 1000W PSU I can easily upgrade to 2x 3090, since they draw just 2x 90W more than my current 2x 2080 Ti.
The big problem is that TDP doesn't really mean much. AMD and Intel define it differently for their CPUs, and both use formulas that let them arrive at whatever number they want by setting one of the constants in the formula. Gamers Nexus did a video on this within the last year or so. I've never seen a discussion of Nvidia's TDP, but I assume it's roughly the same.
I use it as a rough gauge and that is it. But hearing that something marketed as a 250W part was drawing 400W, without an OC, for more than a transient spike was alarming. That could cause serious problems.
If Ampere is an evolution of Turing, which is likely, and the same ratio holds, then the 3080 could spike to roughly 512W and the 3090 could hit 560W.
One 8-pin PCIe power connector is rated for 150W of power delivery, and the PCIe slot can deliver 75W. So 3x 8-pin plus the slot can deliver 450 + 75 = 525W, which could be below the power need of the 3090 and only barely covers the 3080.
The Nvidia 12-pin only delivers 300W (it connects to two 8-pins), so with the slot that's a max of 375W.
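For what it's worth, here is that connector math as a quick sketch (spec limits only; the function and variable names are just for illustration):

```python
# Power-delivery limits: PCIe slot per the spec, plus common connector ratings.
PCIE_SLOT_W = 75      # PCIe x16 slot
EIGHT_PIN_W = 150     # per 8-pin PCIe power connector
TWELVE_PIN_W = 300    # Nvidia 12-pin, fed by two 8-pins on the PSU side

def board_power_budget(eight_pins=0, twelve_pins=0):
    """Maximum in-spec power a card can draw from the slot plus its connectors."""
    return PCIE_SLOT_W + eight_pins * EIGHT_PIN_W + twelve_pins * TWELVE_PIN_W

print(board_power_budget(eight_pins=3))   # 525 W for a 3x 8-pin AIB card
print(board_power_budget(twelve_pins=1))  # 375 W for the FE card's 12-pin
```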
Please do not pre-order the FE cards. Don't pre-order anything, but those cards look sketchy AF with these numbers.
If you read the statement posted by TheMysterIsThePoint, it's obvious that the 400W quote was not the conclusion of a stress test but an observation of a peak draw at some point. TDP might be useless in terms of maximum power draw, but it's not the maximum power draw that kills the machine; it's heat stress on the components from electrical current. Spiking from 250W to 400W is a factor of 1.6, not very unusual for a spike and within what a decent PSU can handle. You would be shocked at the power spike of a light bulb when you turn it on. It's not easy to drive GPU power draw at maximum nonstop, even at 100% usage reported by Windows, which is why chipmakers publish TDP for heat dissipation and system makers multiply TDP by 1.3-1.5 to source the PSU.
Edit: OEMs actually don't use the 1.3-1.5x TDP rule of thumb to source PSUs. They know exactly what power draw they need from their PSU and never pay more than they need to, which is why DIY people think the PSUs in prebuilt systems are garbage.
Just to add to my previous comment, I wouldn't be too stressed out about the 400W power draw vs. 250W TDP. Heat, which is what TDP represents, is what will kill your GPU. With a modern PSU, if the GPU needs 400W and the PSU can only supply 250W, the PSU will simply not send that current to the GPU. The worst that can happen is that the PC won't start or just shuts itself down.
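Just to put rough numbers on the spike being discussed (the TDP and peak figures are the ones quoted above; the rest-of-system load is purely an illustrative assumption):

```python
# The quoted figures: a part marketed at 250 W TDP briefly pulling 400 W.
tdp_w = 250
peak_w = 400
print(f"peak / TDP = {peak_w / tdp_w:.1f}x")   # 1.6x transient

# What the whole system might momentarily pull during such a spike,
# assuming an illustrative 150 W for CPU, drives and fans at that instant.
rest_of_system_w = 150
print(f"momentary system draw ~ {peak_w + rest_of_system_w} W")   # ~550 W
```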
You're suitably cautious, I see; unlike some, who assure themselves they're fine by basing their decision on previous-generation cards.
Edit for typos and spelling
Imo as long as your PSU is at least the officially recommended wattage published for the card you're planning to get, and you aren't planning to use it in a multi-GPU system, you have virtually nothing to worry about. As already mentioned, TDP stats from the likes of Nvidia/AMD/Intel are technically meaningless because the underlying math used to calculate them is constantly subject to change without notice on a company-specific basis. Recommended minimum PSU wattage is where it's at, because that's the statistic e.g. Nvidia can't cheat about without subjecting itself to numerous golden lawsuit opportunities. And, as also already mentioned, over-current protection in PSUs is a given once wattages go up anyway, so...
Regarding concerns about Founders Edition cards specifically, I wouldn't worry about that either. The reason these cards have a non-standard 12-pin miniature power connector is to accommodate the relatively novel cooler and reduced-size PCB design. That's also why AIBs aren't using it in any of their so-far-announced designs: they aren't working with the space restrictions that make it necessary, not because they're resisting some overarching plot from Nvidia to force everyone into using non-standard connectors (as romantic as that sounds).
If access to aftermarket cooling is a concern (it is for me), both AIB and FE PCBs already have waterblocks coming out for them. The only serious question mark (for me) is whether the 3090 will have TCC driver support and interoperability with Turing-era NVLink (Ampere's NVLink is a newer generation of the standard). That, and the actual performance numbers, of course.
I put 'virtually' in bold for you, because you were not giving folks blanket assurance, but leaving yourself a little wiggle room - just in case.
If you plan on building your own PC, don't skimp on the PSU. I learned early on, the hard way, that quality is far more important than the watt rating. Multiplying the system's TDP sum by 1.5 should be sufficient for anyone not doing a crazy overclock. The natural death of a PSU is a slow decline in how much power it can manage. For the same output and workmanship, the cheaper ones almost always die earlier. With not enough power your PC won't start; dirty power puts more stress on the components and leads to earlier failure. Both are nightmares to diagnose when you are having problems with your PC. A GPU lasts 3-5 years for most people; an additional $50 invested in the PSU, spread over 10-15 years, is the best investment for DIY PC builders. This is a good but long article to read about PSUs in PCs: https://www.tomshardware.com/reviews/power-supplies-101,4193.html
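To make that rule of thumb concrete, here's a minimal sketch with made-up example TDPs (the component names and wattages are purely illustrative, not from any specific build):

```python
# Made-up example TDPs for a mid-range build; substitute your own parts.
component_tdp_w = {
    "cpu": 105,              # e.g. an 8-core desktop CPU
    "gpu": 250,              # the graphics card's rated board power
    "drives_fans_usb": 50,   # rough allowance for everything else
}
headroom = 1.5               # the 1.5x multiplier suggested above

recommended_psu_w = sum(component_tdp_w.values()) * headroom
print(f"Recommended PSU: {recommended_psu_w:.0f} W")  # ~608 W, so a 650-750 W unit
```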
PS: there is a perception that prebuilt PCs have terrible PSUs, and some people recommend immediately replacing them. This is not sound advice for spending money unless you plan on swapping out a lot of other components, in which case there is no reason to buy a prebuilt PC in the first place. Big reputable OEMs like HP, and especially the old IBM, actually have very high standards in their PSU procurement (I was told this by the CEO of one of the largest PSU makers). The problem is that the power output needs of PSUs in prebuilt machines are specified very precisely, and PSUs are sold by the watt, so OEMs don't pay for more than the machine needs. When people start swapping components, the new one often needs more power than the original PSU can provide. A good example would be the RTX 3080. The PSU in a prebuilt Dell or HP with a 3080 is very likely rated lower than the 750W suggested by Nvidia. That is OK, because HP engineers know exactly how much the system draws, but adding another 64GB of DRAM yourself might lead to an early death (most likely not, since 64GB needs less than 20W). That's not because the PSU is garbage, although it still could be in the case of brands that need to compete on price; it's because it was asked to handle more than it should.
Just watched this Moore's Law is Dead video on what is possibly coming, and it is interesting. It mentioned that Nvidia is being a bit naughty and that AIBs are likely having to charge more for their versions of Ampere. The main part of the video is about the new Quadros and a possible Titan card next year.
...hmm, a Quadro 6000 with 48 GB. So will the 5000 be boosted to 24 GB?
...and what about the 8000? 72 GB? 96 GB?
Just read that Nvidia has just acquired ARM (I know this is old news, as there have been rumors around that it would do this). Whilst this news is not directly relevant to this discussion about Nvidia GPUs, I wonder if it will change the relationship between Apple and Nvidia, since naturally Apple uses ARM in its iPhones and will soon be using it in its macOS machines as well.
If I recall, the original spat between Apple and Nvidia was around licensing fees, so it seems ironic that Apple will now be paying ARM-related licensing fees to Nvidia.
Who knows? Apple's decision making is opaque. Do not think for one second they will change their mind about having any silicon but their own in their devices. The walls around the garden are going to be very high indeed.
You keep making claims at variance with reality and at variance with the site's rules.
We'll have to wait and see what Nvidia does next year; it could get interesting, that's for sure. I am hoping that what AMD has up its sleeve makes Nvidia release the Ti cards, as it would be cool to get a 16GB 3070.
And this is why I didn't rule out a new TITAN. The full-fat die hasn't been unveiled yet. Not saying this is confirmation of anything, but I fully expect a TITAN based on Ampere next year.
The 3090 seems to be a Titan, which of course, given the name, means there could be a baby Titan (the 3090) and a Titan Titan, with the Titan Titan being priced accordingly and, considering what was said at the presentation, named something else.
This news makes me very hopeful ARM GPUs will get nVidia GPU features at ultra low power usage!
I have mixed feelings about the whole ARM thing. The Apple situation aside, while I think Nvidia can pump a lot of new life into ARM CPUs if they want to, Nvidia can also change course on the whole 'open license' thing at any time. And if Nvidia decides to do the PC CPU thing, that's a whole other can of worms. Validating for Intel and AMD is a fair amount of work already; having to validate for a third major brand...
Sure, people are sorta kinda validating for ARM already, but it's been a bit more of a niche/Google/Amazon thing up till now.
Back in the day, when people almost took Via seriously with their Socket 7 chips and such, Via didn't have the horsepower to become a major player. Via IS still around, but mainly plays in the Asian market. In this case, Nvidia certainly does have a lot of monetary and talent horsepower, and could knock AMD, or maybe Intel if they keep being inept, to #3 on the CPU front... The sleeping Intel giant is very much awake now, so they will do everything they can to get back on top of the CPU game. Yeah, yeah, they are technically very much still #1 ATM as far as sales, but they've been on autopilot for a number of years now, and AMD hasn't been standing still and is steadily gaining market share.
So yeah, ARM lives on, in what form Nvidia chooses I guess we'll see.
??? Everything @i53570k said here makes perfect logical sense.
Being logical and believable doesn't make it correct.
The idea that Dell or HP have engineers involved anywhere near speccing prebuilts, or that they use high-quality PSUs, is not connected to reality. Further, making such claims, or apparently any such positive claim about any entity that you cannot prove, is a violation of the TOS, as the mods have pointed out to me on numerous occasions.
...a new Titan will need to offer something more than the 3090, like at least 32 GB or 48 GB of VRAM, possibly HBM memory, as well as possibly a higher core count, to make people want to shell out close to $2,500-$3,000 or whatever it will end up priced at.
Deleted
Guys, guys, guys (and maybe girls?). What we have here is a failure to communicate. Everyone knows that Dell & HP, among others, build the highest quality equipment possible using components sourced from the lowest bidder (TONGUE FIRMLY STUCK IN CHEEK HERE).
Can we quit beating each other up and just agree that if you are sticking a new card in your system and it still has the OEM PSU, you need to stick an 850W PSU in your box, and let's get back to speculating about all the latest rumors. Only 3 days to go and this thread becomes redundant.
Then everyone is wrong.
Work has stopped purchasing Dell laptops because of their consistently poor quality. Their monitors are great; their laptops, not so much.
This is the issue. If Nvidia is doing a 48GB Quadro, and the 3090 has 24GB, what purpose does a Titan serve with either of these VRAM counts? If it has 24GB but costs $1000+ more than a 3090, then it would have nothing to offer other than TCC mode and a couple of other features. As it is, we do not yet know whether the 3090 has TCC or not, though I will say that the name implies it does not. I know Jensen said this was a Titan type of card... OK... so why not call it a Titan then? It doesn't make any sense.
And if they do a Titan with 48GB, then it competes with the rumored Quadro (and any Quadro, as 48GB is already top tier Quadro). A Titan will offer several key Quadro features, though it is important to note that it doesn't offer all Quadro features. Even so, it would be so close to the Quadro that this would not make any sense for most use cases.
The other issue is the core counts. The rumored Quadro only has a couple hundred more cores than the 3090, and the same would apply to the Titan as well, because the 3090 is already so close to the full die. That is a tiny 2% difference in core counts. We need to remember that these Ampere CUDA cores are not quite the same as past CUDA cores; they are around 33% weaker (possibly more) when you factor in the performance Nvidia has shown us so far. So the Titan or Quadro would only be like 1% faster in actual performance. Depending on clock speeds, they might actually be slower! Quadros are often more conservative on clocks, so that could be the case.
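As a rough illustration of why a couple hundred extra cores barely moves the needle, here's a naive cores-times-clock model. The 10,496 cores and ~1.70 GHz boost are the announced 3090 figures; the 10,752 full-die count and the lower clock for the bigger part are assumptions about the rumored cards:

```python
# Naive throughput model: performance ~ core count x clock speed.
def relative_perf(cores_a, clock_ghz_a, cores_b, clock_ghz_b):
    """How much faster part A is than part B under the naive model."""
    return (cores_a * clock_ghz_a) / (cores_b * clock_ghz_b)

cores_3090, boost_3090 = 10496, 1.70   # announced 3090 specs
cores_full = 10752                     # assumed full-die count for the Quadro/Titan

print(relative_perf(cores_full, 1.70, cores_3090, boost_3090))  # ~1.02, barely faster
print(relative_perf(cores_full, 1.65, cores_3090, boost_3090))  # ~0.99, slightly slower
```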
So then, where does a possible Titan even fit into the Ampere lineup?
The only thing that would make sense to me would be splitting the difference and doing a 32GB card, or something in that range. I am not sure how doable that would be as it may need a different memory configuration.
So this is why I wonder if Titan even gets released. If we do get a Titan, it may be quite far off, like a year. This would give Nvidia time to sell those expensive Quadros, and about one year would be logical time to put out a card that beats the Quadro.
In other words, don't expect a Titan too soon. Maybe we get lucky and get one in the spring; that would match the time frame for the 1080 Ti's release after the 1080. But the difference there was that Quadro Pascal also released near the 1080, and it had 24GB compared to the Titan XP's 12 and the 1080 Ti's 11. So the Quadro released much earlier and it kept its place in the lineup. With Ampere, the Titan doesn't seem to have a good place yet.
The 3090 will not have the validation that the Quadros will have. It might work fine in stuff like Solidworks, it almost certainly will, but unless the card is validated there are companies that will not use it. Some companies won't even give you CS unless the hardware you're using is validated.
That's why Nvidia could sell both the Titan RTX and the Quadro RTX 6000 last gen. Same amount of VRAM and CUDA cores, but almost double the price for the Quadro.
People get fixated on the top-of-the-stack Quadros, but there are generally 4 or more Quadros in the line and they match up to the consumer stack; they use functionally the same chips. They generally clock slightly lower, use standard GDDR6 instead of GDDR6X VRAM (GDDR6X is only made by Micron and is not part of the GDDR standard, which makes validating it a no-go), and come with more VRAM (usually) for a lot more money.
I said my tongue was stuck firmly in my cheek, because the highest quality equipment built using lowest-bidder components really isn't very high quality.