Comments
I've mentioned this on here before (not sure if it was in this thread) - Nvidia's long-term game is to take advantage of the recently increasing PCI-E lane bandwidths and high-speed SSDs to develop their GPUDirect intercommunication system (previously a datacenter/IBM Power9 CPU family only affair) into a generalized direct-from-disk asset-loading mechanism, currently being branded as RTX IO.
What this means in practical terms (assuming the technology pans out as expected, starting next year) is that many end-user-focused, asset-heavy, GPU-accelerated tasks like gaming are going to need dramatically less on-GPU storage (in theory, only as much as is needed to store the finalized game assets and framebuffer for a single frame) than what's needed right now, for the same or better level of overall performance. Hence why these 16/20GB Ampere card variants have never made any sense, apart from being a counter-AMD marketing ploy.
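To make that VRAM claim a bit more concrete, here's a toy back-of-envelope in Python. This is not real RTX IO or DirectStorage code, and every asset size and per-frame working set in it is made up; it just shows why a card only needs to hold the largest single-frame working set if assets can stream straight off the SSD:

# Toy illustration only - not real RTX IO / DirectStorage code; all numbers are invented.

# hypothetical asset sizes in GB
assets = {
    "city_geometry": 6.0,
    "character_textures": 4.0,
    "terrain": 3.5,
    "vfx_textures": 1.5,
    "cutscene_textures": 5.0,
    "distant_lod": 2.0,
}

# hypothetical working sets: which assets are actually visible in each frame
frames = [
    {"city_geometry", "character_textures"},
    {"terrain", "character_textures", "vfx_textures"},
    {"cutscene_textures", "character_textures"},
]

# preloading: every asset must sit in VRAM for the whole session
preload_vram = sum(assets.values())

# streaming: only the largest single-frame working set must be resident at once
streaming_vram = max(sum(assets[name] for name in frame) for frame in frames)

print(f"VRAM needed with full preload       : {preload_vram:.1f} GB")
print(f"VRAM needed with per-frame streaming: {streaming_vram:.1f} GB")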
Long term, though, is not now. We have neither the hardware nor the games to make use of this. I do agree, though, that 20GB is just not really needed for games at the moment either.
Well, most new products don't get announced until close to the vendor stock date, no matter what the product. Vendors know about the products before those press releases for planning reasons, though.
Dang! That changes all my plans. But notice the 1st sentence in that article: "NVIDIA has just told its board partners that it will not launch GeForce RTX 3080 20GB and RTX 3070 16GB cards as planned."
So they have to re-engineer those cards and release them under different SKUs at a later date in 2021. Those current 30X0 cards must be pretty disastrous. I don't know why, but I assumed these GPU/CPU makers thoroughly tested their new card designs, just as they'd be in consumers' hands, well before consumers ever got them, but I must be wrong.
I think nVidia will be releasing 16GB & 20GB 30X0s in 2021, maybe even with options for more RAM; but maybe I'll buy the cheapest 3060 available for the software stack that uses it & a Big Navi card with 16GB instead.
They are in the business of making money.
If the bean counters can save 2 cents per 1000 chips, a chip that doesn't really function will be used instead of a fully functional (proven) one.
The effect of the problems will be minimised with drivers as far as it can be done, and the product released to the public.
I have a suspicion that nVidia itself is holding off on 16/20GB models, but the board partners will be releasing them. We won't see them as nVidia-brand FEs. The partners have more freedom with board design regarding memory module space and placement, and as a bonus nVidia themselves take less heat from Ampere early adopters by not "refreshing" FEs 3-4 months after launch. The FE 8/10GB models will maintain the MSRP baseline, have the better performance per cost, and cause less confusion for the non-enthusiast segment. Just a hunch.
Obviously there is a lot more to it than the RT cores, but that is the most glaring difference. Did I say that RT cores were the only difference? Jeez, dude. Do you really think that 826mm2 chip has something cut out or disabled? The A100 is purpose-built for the tasks it does; it has no RT cores, period. And RT cores are not so tiny. They take up a large portion of every SM.
On the left is the A100 SM, on the right is the 3080/3090 SM. These are completely different chips. They have similarities, but there are a vast number of changes between them. They are very different architectures sharing the same name.
Look how large that big yellow block is on the 3080: that is your RT core. That yellow block is far larger than the individual CUDA and Tensor cores. You could probably shove at least 4 CUDA and Tensor cores comfortably into that space. So if the A100 has disabled RT cores, yikes, I would be raging mad as an A100 buyer. That would be a massive amount of unused die space, which would also be horribly inefficient.
By contrast, the A100 has a much larger space for Tensor, and this makes sense given how important machine learning is to A100 and those who would buy one. A100 also has a fair amount of space dedicated to FP64 units, something that the gaming cards lack.
So it makes sense that A100 can be done at TSMC and gaming Ampere at Samsung, because these are totally different chips.
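To put rough numbers on that, here's a quick comparison sketch in Python. The per-SM unit counts are my recollection of Nvidia's public GA100/GA102 whitepapers, so treat them as approximate and check the whitepapers before quoting them:

# Per-SM unit counts for GA100 (A100) vs GA102 (3080/3090), as I recall them
# from Nvidia's public whitepapers - double-check the originals before quoting.
sm_units = {
    "GA100": {"fp32": 64,  "fp64": 32, "tensor": 4, "rt": 0},
    "GA102": {"fp32": 128, "fp64": 2,  "tensor": 4, "rt": 1},
}
# full-die SM counts; shipping parts (A100, 3080, 3090) enable fewer SMs
full_die_sms = {"GA100": 128, "GA102": 84}

for chip, units in sm_units.items():
    totals = {name: count * full_die_sms[chip] for name, count in units.items()}
    print(chip, totals)

# GA100 spends its area on FP64 and HPC throughput with no RT cores at all,
# while GA102 spends it on FP32 and RT, so neither is a cut-down version of the other.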
Keep in mind that the diagrams you are referencing - while meant to convey the relative importance of individual chip components - are not necessarily drawn to physical scale. To ascertain that, you'd really need to be looking at very high resolution versions of these:
AIBs can release nothing Nvidia does not OK. If Nvidia cancels an SKU, the AIBs can't release it even if they have the cards built, boxed, and ready to ship.
Exactly. Block diagrams bear little, if any, relationship to the actual transistors.
I'm not suggesting it would happen behind nVidia's back. If it does happen, I'm saying that it was planned for them to be released by nVidia and all partners, but nVidia is deciding not to do their own, allowing partners to go ahead with it. Yes, articles mention confirmed "canceled nVidia SKUs", but are those just FE SKUs? We'll see.
Even without a block diagram, or any picture at all for that matter, these are completely different chips. The A100 has no RT cores, but has FP64 instead, and it has additional Tensor cores. These facts by themselves should be enough to understand how different these chips are. Nvidia cannot just go and cut a gaming card out of the A100. They could if they really wanted to, but without any RT cores that would be silly.
The wording on this does leave the door open. When the name Nvidia is used like this, they tend to mean 1st party Nvidia. Obviously these are Nvidia GPUs, that would be like saying "Nvidia is cancelling Nvidia GPUs". That would be redundant. This could be a smokescreen to throw off people for a while, so that they can surprise people with an actual release. The rumor has stated for a long time that the extra VRAM cards would be 3rd party AIB only, this leaked statement does not necessarily conflict with that.
But I always said from the start that there was no guarantee the extra VRAM cards would release. You cannot depend on these things happening. Ultimately they are only rumors and speculation. All of the stuff we have discussed here can be completely wrong.
It's fun to talk, but I don't think we should again try to divine what NVidia is going to do in the future. It is not difficult to imagine NVidia having a risk mitigation plan in place that allows them to switch processes. I mean, again, this is a billion dollar company we're talking about, and I would fire Jensen if he didn't consider the risks related to ever evolving process technology. I can't imagine a competent semiconductor CEO not having hedged their bet. I've never studied business, but if I had, I would think that would be a freshman level course...
Jensen never studied business, thankfully. He and the other founders did bring in some business guys, but he still runs the company without them overruling him.
You can't possibly know that.
This is well known. Things like Jensen setting the MSRP of cards before going on stage for the GPU announcement. No business guy wants things like that.
Pretty sure Jensen doesn't pull a number out of an orifice when the showrunner says "10 seconds 'til you're on..." Pretty sure he uses some type of valid model, even if it only exists in his head. Even if that ever happened, the more likely scenario is that NVidia's droves of MBA/Statisticians produced a number of distinct pricing models that gave different solutions, and Jensen chose the one in which he personally had the most confidence. That is believable.
What is not believable is that Jensen would not have a mitigation plan for such a major risk as Samsung 8nm not working out.
In any case, there is no way for anyone here to know. But it's still fun to talk about it.
Then it isn't announced. One doesn't plan their purchasing based on what might appear, but on what is available or has at least been 'announced'.
Rumours are useful, but it should be remembered that they are only rumours.
Considering that there are AIBs, including one that is bigger than Nvidia, that have all but said they will drop them if this keeps happening, I think you greatly underestimate just how seat-of-his-pants Jensen is flying.
Just look at this launch. They got so much blowback from their primary partners, which was absolutely predictable to anyone who thought about it, that they have switched things around less than 6 weeks after launch. They won't sell FE cards directly any more; Best Buy will be the only source, if there are any more made, which is questionable. That means the AIBs won't have to deal with an MSRP that is below cost for them sitting on store shelves and at Amazon and Newegg any more. Jensen knew when he made the launch announcement what the AIBs considered the floor for the 3080, and he still went under it. What business guy would have ever OK'd that? Do you have any idea how much Nvidia makes from the AIBs? Do you know how much they will lose next year by undercutting the market?
Exactly. Rumors are useful. Had NVidia announced a 20 gig card, they would have surrendered the right to cancel it. But on the other hand, if the information makes its way to some stooge on YouTube, well, NVidia has plausible deniability.
It still amazes me that some people think, what, that these Youtubers or their sources are industrial spies and know anything beyond what the companies want them to know? What do they think, say, a 2 trillion dollar company like Apple would do to a guy youtubing out of his guest room that actually interfered with their product launch?
Investigative journalism? A lot of people work at Nvidia. Lots of potential sources.
Agreed, but until it's official, it doesn't exist.
Nothing. The bad press would be so much worse than any hit they'd take from a bad leak that it just isn't worth it. They might, might, fire the guy who did the leaking but try and do something to a press guy? So that he and everyone else in the tech media can spend weeks writing and talking about it?
I've been keeping an eye out for reviews of the Gigabyte 3090 blower version, and this popped up:
4x 3090 blowers in a single machine
https://www.pugetsystems.com/labs/articles/Quad-GeForce-RTX-3090-in-a-desktop---Does-it-work-1935/
Those temps look good, but I wouldn't want to pay the power bill. Then again, if you're rendering so much quicker, you wouldn't be running them for so long.
Yeah, I think for the kind of people in the market for a rig like that, the power bill is the least of their worries.
I wanted to know what the thermals would be like, given these are 350W (and more) cards. I read a LOT of 'expert' opinions on the internet saying that this blower card would be thermal-throttling e-waste, many of which had some unkind things to say about Gigabyte for even manufacturing it, because there was absolutely no way, in these people's infinite experience, that a blower could possibly cool these cards.
Myself, I was sure that Gigabyte would not release a product that simply didn't work due to thermals. As it turns out, one of these blower cards by itself actually runs cooler than many of the traditional cooler versions out there right now. My bigger concerns were how well it would hold up in multi-GPU configs, and to a lesser extent how noisy it is. It holds up well beyond expectations in a multi-GPU config. And I think the video demo they did about the noise made it sound worse than it is, due to the amount of ambient noise in the environment they were testing in. But even so, I would prefer it being loud with thermals under control than being quiet and having major thermal issues from the amount of heat that a traditional cooler setup on these cards would blow into the case in a multi-GPU config. My machine is already pretty loud when I have all my case fans turned up.
Yes, you would think that, but since rendering is the bottleneck, rendering quicker has meant to me that I just render twice as much :) This hobby has made a $100 difference in my electric bill, going from a single 960 to 4 2080 Tis.
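Out of curiosity, here's a quick back-of-envelope on the electricity side in Python. The wattages, hours, and $/kWh are assumptions, not measurements, so plug in your own numbers:

# Rough monthly electricity cost for a GPU rendering load. All inputs are assumptions.
KWH_RATE = 0.13          # assumed electricity price in $/kWh
HOURS_PER_MONTH = 300    # assumed hours of rendering per month

def monthly_cost(load_watts, hours=HOURS_PER_MONTH, rate=KWH_RATE):
    # dollars per month for a sustained load of load_watts
    return load_watts / 1000 * hours * rate

configs = {
    "1x GTX 960 (~120 W)":     120,
    "4x 2080 Ti (~260 W ea)":  4 * 260,
    "4x RTX 3090 (~350 W ea)": 4 * 350,
}

for name, watts in configs.items():
    print(f"{name:26s} ${monthly_cost(watts):6.2f}/month")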
I've run 4 Gigabyte 2080 Ti blowers for the better part of two years and have had zero problems. I plan to buy 3090 blowers from Gigabyte again. The only question is how many.
It's their motherboards that I am never going to touch again... My Gaming 7 was a nightmare and I got no help at all from them. Certain problems were never resolved and I just worked around them.
Point taken. GPU fan noise isn't an issue that can be ignored, as seen in the video. My MSI 2080 is barely audible even under stress, and that's the way I like it. I keep my case open, which means I need to clean the components of dust regularly. Listening to loud fans spinning insanely would stress me out no end.
I'm surprised you have had issues with their motherboards. In every machine I have built, I have only used 2 motherboard manufacturers, Gigabyte and Asus, probably equally, 4-5 each. I believe Asus are pretty much regarded as making the best motherboards, and out of all the motherboards I have used in my personal machines, I have had issues with 2 of them, both of them Asus boards. Maybe they were not even issues as such; maybe a better description would be a strange quirk that two of them had. Sometimes, seemingly randomly, they would not POST on the first boot attempt. They would ALWAYS POST on the second attempt, though.
So nothing deal-breaking, but something I never understood. I've never had any quirks or issues with any of the Gigabyte boards I have used, though. My current one is a Gigabyte TRX40 Master, and 0 issues so far.
Gigabyte and Asus are the only two GPU manufacturers I buy too, and funnily enough, I've never had an issue with a Gigabyte GPU, but I have an Asus 2060 blower at the moment that sometimes generates a ticking noise in the fan, which I can stop by getting in there and wiggling the fan a bit (while it's off, of course).
I guess I should add that I didn't have any problems until I crammed 4 GPUs into my Gaming 7. My hypothesis is that those types of configurations just aren't tested as well because they're kind of fringe. Weird things started to happen, BIOS updates would solve one problem and cause another, and there was no one with a similar configuration to provide the wisdom of prior experience.
If I do a home build, I'm going to go with the precise configuration the reviewer used.