Comments
I always think of the remark "the truth is stranger than fiction" in these situations. It's something akin to Rule 34 of the internet (if it exists in real life, there is porn of it, or something like that); in this case, if you've read a book or seen a film/anime about it, whatever work of fiction it was, remove the currently impossible aspects and it has likely happened somewhere, somewhen.
Well, regarding getting much more VRAM sooner rather than later: as anyone who has rendered a complex Daz scene in Iray knows from having their scene kicked out of VRAM to a CPU render because of its size, if Nvidia is serious about ray tracing in their GPUs, those GPUs must have the VRAM to support it. And with AMD adding hardware ray tracing, AMD must as well. Both of those are nice, because it means GPUs will have an overabundance of VRAM for 99.9999999% of other GPU tasks, so you can buy GPUs based strictly on performance if you don't need ray tracing.
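(Tangent for the tinkerers: if you want a heads-up on whether a render is likely to get kicked to CPU, you can peek at free VRAM before hitting render. A minimal sketch using Nvidia's NVML Python bindings; the 10GB threshold is an arbitrary assumption for illustration, and Iray makes its own fallback decision internally.)

```python
# Minimal sketch (not how Iray itself decides): check free VRAM
# before committing to a big GPU render, via Nvidia's NVML
# bindings (pip install pynvml).
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # bytes
    free_gb = mem.free / 1024**3
    total_gb = mem.total / 1024**3
    print(f"Free VRAM: {free_gb:.1f} / {total_gb:.1f} GB")
    if free_gb < 10:  # arbitrary threshold, purely illustrative
        print("Big scene may not fit; expect Iray to fall back to CPU.")
finally:
    pynvml.nvmlShutdown()
```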
I think GPU efficiency gains are starting to stall, and more VRAM, wider buses, and smaller dies are where they are going to have to push for performance over the next few years. There comes a time when the most efficient algorithms for GPUs are done, relative to ROI, and they are nearly there. I mean, we don't expect them to add entire physics engines next (some of that is already there), do we? Well, maybe they will.
I'm speculating here, but seeing that Nvidia currently has video cards with 48GB of VRAM, there's no reason they couldn't move the Titans up to, say, 32GB if they wanted to. And with AMD routinely releasing 16GB cards for less than $2000 over the last couple of years...
And I can't think of a reason not to expect a professional card with more than 48GB of VRAM waiting in the wings for, say, an early-to-mid 2021 release, which would make it easier to move the entire product stack up on the RAM count. The thing is, the more VRAM that Nvidia and AMD put on these cards, the more uses power users like content creators and deep-learning types can find for them. Not sure how the HBM2 stacking limits tie into that, but 64GB sounds doable to me. Of course, that 64GB card would cost serious money, but movie studios and top-tier data scientists can probably afford it...
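For what it's worth, 64GB passes a quick back-of-the-envelope check if you assume HBM2E-class memory; the die size and stack count below are my own assumptions, not anything Nvidia has announced:

```python
# Rough sanity check on a 64 GB HBM card. Assumes HBM2E-class
# parts: 8-Hi stacks of 16 Gb dies and four stacks on the
# interposer (a common pro-card layout). Illustrative only.
gb_per_die = 16 / 8        # a 16 Gb die is 2 GB
dies_per_stack = 8         # 8-Hi stack
stacks = 4                 # four stacks on the package
print(gb_per_die * dies_per_stack * stacks, "GB")   # -> 64.0 GB
```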
Even those 32GB Radeon Instinct MI50 cards aren't cheap (in the $5K range).
I recently downloaded a new Daz product package that was almost 2GB in size. I haven't tried loading that product yet (some reactor room), but I expect it'll have a lot of VRAM overhead... 11GB is old news at this point, so Nvidia really does need to move the RAM count up at least a little, IMHO.
I know that they'd rather we all buy Titans, but the fact is a lot of people here are making do with lesser cards.
I'm also interested if AMD may decide to jump on the 'more than 32GB bandwagon' once Big Navi is released. Not for gamers of course, but for the high end 'pro' solutions. Looking at their product stack, they currently max out at 32GB.
But I digress. As I said in the OP, it's all speculation until benchmarks start leaking out through sieves or we see the official presentation... even then there's always the chance of a paper launch, but I'm not expecting one of those for a good chunk of the upcoming Nvidia product stack ATM.
Confirmation implies it's true, which isn't the case. It's just another clickbait article based on some forum post, just like the other rumor articles.
Well, they gotta sell ads somehow.
Completely agree with you. I think some of the earlier speculation sounds more realistic.
TITAN Ampere - 24GB
3080Ti - 12GB
3080 - 10GB
Nvidia has supported entire physics engines on GPUs for quite some time: PhysX, HairWorks, and I think one whose name I can't remember. The big issue, in the gaming world at least, is that since they are proprietary, people with AMD cards get much worse visuals or performance when playing the game. IIRC it was The Witcher 3 that had a bunch of these features; it looks amazing on a good Nvidia card with everything turned on and just awful on any AMD one.
Agreed... then that leaves the 48GB cards for the Quadros... while they can still say the 3080Ti is a step up with the extra 1GB of RAM... yet justify the price premiums for the Titans.
Curious to see how it all plays out, I must admit I'm looking at Titans this go around.
...Jack, thank you. My thoughts as well. Gaming is still the primary market for consumer grade GPUs where frame rate is more important. We are a small niche in comparison.
About the only way that gulf would be maintained, based on the "leak" in the OP, is if the Titan went to 32GB; but then, that would also mean the Quadro and Tesla lines getting another VRAM boost to maintain the gap between "prosumer" and "professional"...
BTW, love the bonnet.
Don't be shocked if you see a split in the Quadro stack. While Nvidia charges a premium for the validation done on Quadros, the quantity of VRAM does add cost on those cards, and for many of the AI tasks the RTX 6000 and 8000 are doing, it's just not needed. As our customers get more knowledgeable about the hardware, they have started asking for the lower SKUs; we've seen more requests for the 5000s in the last six months, even though the 5000 has only two-thirds the CUDA cores of the 6000/8000. With these Epyc Rome systems having more PCIe lanes than God, who cares how many cards you need if VRAM per card isn't an issue?
While Nvidia certainly wants to sell those Ampere racks (we've been trying to order them for some of our customers), we are also going to want some Ampere Quadros, and there is demand for some with lots of CUDA and not so much VRAM. I'm sure there is still demand for ones with all the VRAM too, so it would make sense for Nvidia to split their product stack, but who knows. I can never figure out what Nvidia is doing in the enterprise space.
Heh, ancient news. This sort of debauchery has been part of the US since the early days of the railroad, coal, and steel industries, and it was practiced and refined long before the US ever existed. Wherever there was wealth to be made, rest assured there was always someone, or some people, getting deceived, robbed, and/or outright steamrolled.
We'll be lucky if there's 2GB of additional VRAM, if any, on the Ti and below... I wouldn't expect them to close the gap much between the Ti and Titan in regards to VRAM without a major price hike on the Ti, especially if it results in two of them outperforming a Titan while having the same VRAM pool. If I had to guess, they'll increase the CUDA cores by 25-30% while being tight with the VRAM.
If there is not a nice step up in VRAM, I will be sitting this release out. Maybe grab a used 2080 super to match what I have, if I can find a decent deal.
Rumormill here, but one of the YouTube tech channels is suggesting that the cards might be the 2100 series, keeping in line with it being the 21st anniversary of the GeForce cards. All of this is speculation and rumor, since everyone else is still suggesting they will be 3000 series cards.
Nvidia has trademarked various iterations of 30xx and registered those names in the EU as well, so it seems unlikely they will use 21xx.
Ahh, k. Yeah, I wondered if that was the case, as pretty much everyone had been saying 3000 series for the new cards... so I'm not sure what made this one tech channel think that Nvidia would name them something else.
Yes, my point exactly... I just wanted to point out that the level of sophistication of Competitive Intelligence is at the same level as National Security Intelligence because they're the same thing, practiced with the same tactics, techniques and procedures, by the same people.
I agree. I'll even go as far as to say that the correspondence you have observed is too uncanny for it not to be: A writer starts with the "Truth", and adds some currently impossible aspects, and thus arrives at his "Fiction" :)
While researching the First Crusade for my animation project, I've learned things that I just couldn't have made up.
Companies do that. Since all of the trademarks are public information, they trademark a bunch of things. Hiding the truth in plain sight, if you will. The only thing we can be certain of is that it's one of those things.
As a PC gamer I hate saying this, but consoles will set the baseline for VRAM. The new consoles both have 12+GB of VRAM for graphics, and that will be the baseline where the high-end cards start this coming gen. You can't sell a $1000-1500 GPU against $500-600 consoles if it has inferior specs on paper, and memory is number one on the checklist. If the top-end Nvidia card does not start with 16GB, once console ports start hitting PC we will see the spec quickly get bumped to 16GB, if not more, in the "Super" or "Ti" or whatever iterations, as ports always consume more resources than native console games, even before counting ports getting higher-resolution assets that won't fit on the console natively.
We will see a sizable jump in VRAM this gen. It may not be in three weeks, but it will come. After that, we will have to wait another 5-6 years...
Dude, Nvidia is launching before AMD to beat them to the punch. What Nvidia is doing is a primitive strike. They are striking first, and by doing so, AMD will not be able to claim any superior hardware at all. If AMD had managed to get to market first, they would have had a GPU faster than the 2080ti. Even if Nvidia was launching Ampere the next month, AMD would still be able to say they had the fastest consumer graphics card in the world. It has been a very long time since AMD has been able to make such a claim at all. It would be huge news all around the tech industry, and do not doubt for a single second that people would be buying whatever they had. AMD has made big strides and some people are just tired of Nvidia.
Instead, by launching first, Nvidia will leapfrog AMD before they even get out of the gate. This is a basic strategy of war: attack before they can attack you. The tech conversation will be focused on Ampere more than AMD, even when AMD finally launches. And unless AMD can outright beat Nvidia, their sales will be lower than if they had launched before Nvidia did. People are waiting for the next generation, ready to snatch something up.
And yes, Nvidia is concerned about consoles. Sure, Turing is faster than an Xbox Series X, but only with a $1200 piece of hardware! The 2080 Super is a pretty penny, too. That doesn't include the rest of the PC! A Series X will not require you to buy anything else, except obviously games. So a 3070 beating a PS5 or Series X... well, it better! And let's not forget the 2070 was $600 at launch. So you are comparing what may be a $600 GPU to a fully fledged box that is a complete package. Also, everybody knows that because consoles are designed to be gaming machines from the ground up, they can punch above their class and do more than a supposedly equivalent PC. Consoles are not weighed down by things like Windows or motherboard designs. The CPU and GPU are on a single package together, with the VRAM right there close by. On a PC these things are physically separated, and all that data has to travel across various buses. Consoles remove much of this congestion. Just look at The Last of Us Part II on the PS4 Pro, which is only 4 teraflops. Imagine what they can do with more than double that power, plus an SSD and ray tracing.
Did you not see the Unreal 5 demo that was confirmed to be running on PS5 hardware? There are not many gaming PCs that can handle what was shown there, and the ones that can certainly cost a LOT more than a console.
There are going to be people who decide to buy a console because the GPUs that are better than consoles are outrageously expensive, and good gaming PCs can be very costly. This is one of the main selling points of consoles in the first place! Remember, a number of years ago PC gaming was basically left for dead. But it began to make a comeback. Why? Because the price of building a game machine came down. This was during the 4-core era, and almost any CPU was good enough to play video games. The Xbox 360 and PS3 had been out for ages, and gamers were hungry for better hardware. Then the Xbox One and PS4 were kind of weak, so you could easily build a PC that beat them for roughly the same price as those consoles. You could build a PC with an i3 and a 750ti that was about the same. A 750ti!
Think about that. A 750ti was all it took to match a console. The 750ti was less than $150 brand new, and was sold for as little as $120. PC gaming made a huge comeback, and cheap hardware was the biggest reason why. Fast forward to now... you need to drop what, $800 to $900 to match the Series X??? That is not cheap anymore! And if the 3070 matches the PS5, like I said before, that's probably a $500-600 card by itself. You cannot build a PC that matches the consoles in both price and performance. It is just not possible like it was before. So sure, you can match the performance... but not the price.
If Nvidia is not worried about consoles, they seriously should be. They need to not just be faster than consoles, because of their prices, they need to be laughably faster than consoles. The 3070 can't just match a PS5, it needs to beat it by a wide margin.
Just FYI, Halo's multiplayer mode is already confirmed by MS themselves to run at 4K 120 frames per second. So... yeah, there will be 120 FPS games out there. It is not just a bullet point for marketing. I can imagine a lot of first-person shooters and other esports-like games will target 4K 120. It is going to happen and be a thing.
Virtual RAM has been around a long time, but it has never been this fast or quickly accessible. The throughput speed is only PART of the equation. Both consoles are able to access the data on the SSD faster than a PC can. You cannot plug in an SSD that is as fast as the PS5's and expect it to be as fast in practical use; PCs are simply not designed that way. While Nvidia is developing NVCache and AMD has their own version, these are still not the same as the streamlined design that the consoles will have.
Let me give you another example of how far the console VRAM will go. Most PS4 games were designed to keep about 30 seconds of gameplay in VRAM. For the PS5, they are targeting 1 to 2 seconds of gameplay. Let that sink in: they are only keeping a mere couple of seconds of gameplay in that 13.5GB of VRAM. That is crazy efficient. What this means is that games will be incredibly streamlined. The game files themselves will take up less space, because current games must duplicate their data all over the hard drive so that it can be found faster. And then you add in the fact that they only need a couple of seconds in VRAM at a time? Where do you think the data is coming from, if it is not in VRAM? It is not coming from some magical place; it is coming from the SSD. The PS5 may be using virtual RAM, but it is doing it in a way that no PC has done before, or is even capable of doing.
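To put rough numbers on that (the drive speeds and asset size here are my own illustrative assumptions; only the 30-second vs. 1-2 second windows come from the claims above): the lookahead window a streaming engine needs is basically asset size divided by drive throughput.

```python
# Back-of-the-envelope: how far ahead a streaming engine must buffer.
# Assumed figures for illustration: a last-gen HDD reads ~0.1 GB/s,
# Sony quotes ~5.5 GB/s raw for the PS5 SSD, and the "next area"
# needs ~3 GB of assets.
def lookahead_seconds(asset_gb: float, read_rate_gbps: float) -> float:
    """Seconds of lead time to load `asset_gb` at `read_rate_gbps` GB/s."""
    return asset_gb / read_rate_gbps

print(f"HDD (0.1 GB/s): {lookahead_seconds(3.0, 0.1):5.1f} s of lookahead")
print(f"SSD (5.5 GB/s): {lookahead_seconds(3.0, 5.5):5.2f} s of lookahead")
# -> 30.0 s vs 0.55 s: roughly the 30-second vs couple-second
#    windows described above.
```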
What this means is that Sony could make games that are just impossible on modern PC, even if the PC has SSD. There is literally no way around this...unless you increase VRAM.
They demonstrated this with Ratchet and Clank in the PS5 show. You now have an ability that opens portals to other worlds at any time. And not just a portal to one other world; it can open portals to a wide variety of entirely different worlds, with different enemies, different AI, different everything. And it can do this instantly during the game. There is no PC game that can match this exactly; it would need to be able to store ALL of the data for every single area available to the portal in VRAM at the same time. It should not take much effort to see that this would require a LOT of VRAM. That is why, way back when Valve's Portal came out, you could only open a portal back to a specific place. There were only two locations possible at any time, and the locations were very similar, often on the same map. It is nothing like what this new game is doing.
In this video, the dev explains in very clear language that the SSD is why they are able to load these new worlds on the fly. You can call it virtual RAM or whatever you like; the fact is that they are using the SSD like RAM. That is, by definition, virtual RAM, LOL. But unlike a PC, which resorts to virtual RAM when actual VRAM and RAM are low, the PS5 is using virtual RAM constantly. You know, like RAM. :)
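For what it's worth, PCs do have a basic version of "using the drive like RAM": memory-mapping a file lets the OS page data off the SSD only when it's actually touched. A minimal sketch below (the file name is a hypothetical placeholder); the consoles' pitch is essentially that their dedicated decompression and I/O hardware make this kind of thing fast enough to lean on every frame.

```python
# Demand-paging a big asset file straight off the drive, the basic
# mechanism behind "using the SSD like RAM". The file name is a
# hypothetical placeholder; assumes the file is at least ~1 GB.
import mmap

with open("world_assets.bin", "rb") as f:
    # Map the whole file into the address space; nothing is read
    # from the drive until a page is actually touched.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as assets:
        header = assets[:16]                     # faults in only the first page
        far = assets[1 << 30:(1 << 30) + 4096]   # jump 1 GB in; reads one page
```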
I never really thought of it as deeply as you just did, but yes: since PCs will never have the kind of throughput that the PS5's proprietary SSD setup has, because it has to go over a PCIe bus, they've got to do something to account for it so as not to create a differential in PC vs. console performance. More VRAM on consumer cards seems like the obvious answer.
I don't really care why we get more VRAM for rendering, so long as we get it, even if rendering isn't the reason per se :)
It's possible that AMD cannot launch Big Navi before Ampere because of consoles. 7nm capacity is very tight (probably not anymore, now that Trump has banned Huawei from using it), and AMD needs to prioritize wafers for Sony and Microsoft. Also, it's not a good idea to steal your biggest customers' thunder by announcing a GPU ahead of the console launches and telling the world how much faster it is than the consoles.
AMD ended up soaking up some of that excess capacity, in part related to Apple moving mobile chip production to 5nm, according to reports. Nonetheless, yeah, they are currently struggling to keep up with Renoir demand. Renoir has a lot of similarities to the mobile chips, but I wouldn't be surprised if AMD isn't able to re-allocate the fab lines that are currently cranking out console chips, even though they grabbed more capacity, due to commitments to the console makers.
Plus, Vermeer is due to launch soon, so that is no doubt also soaking up a good chunk of AMD's allocation. That being said, in past rounds of console production the numbers of console chips that were being produced, as compared to desktop/laptop/server chips, were simply massive by comparison. That division has been showing losses over the last 2-3 quarters as console sales have dried up in anticipation of the new consoles, but back in 2015 it was the console/semi-custom division that essentially kept AMD alive, looking at their balance sheet.
As to when the new AMD graphics chips will launch, all Dr. Su has said is that they are on track to launch by the end of the year. As to whether the low and mid-range chips may be announced and perhaps launched sooner (say late September) is still a good question. Of course, other than maybe as a dedicated monitor GPU to free up your Nvidia card 100% for rendering, most people around here simply won't care about those.
@outrider42
I've heard of pre-emptive strikes before in this context, but not primitive strikes... just had to tease ya!
Yeh, they do.
But as users/readers it is up to us (yes we do have a responsibility) to think about what we're reading.
In other news, Intel made a lot of noise at their recent architecture day....
https://wccftech.com/intel-42-tfops-xe-gpu-benchmark/
https://www.anandtech.com/show/15973/the-intel-xelp-gpu-architecture-deep-dive-building-up-from-the-bottom
https://wccftech.com/intel-tiger-lake-cpu-superfin-architecture-deep-dive/
https://www.anandtech.com/show/15971/intels-11th-gen-core-tiger-lake-soc-detailed-superfin-willow-cove-and-xelp
If Xe ends up going anywhere, it looks like we'll have even more GPUs in the market that don't like Iray... maybe that'll help drop Nvidia prices at least.
Intel's been faltering lately, even though they are still eking out that last bit of performance from 14nm, but at some point the giant WILL regain its balance. If their Architecture Day presentations hold true, 2021 should shape up to be an even more interesting year.
Until then, of course, we get to watch Nvidia and AMD trade blows on the GPU front...
Not seeing anything new in the news feed for Nvidia (and AMD) atm, so carry on!
I'm so sick of moving goalposts.
No, Nvidia does not care, and has never cared, about consoles matching their current-gen GPUs on day of release. These consoles stay out for years. The PS4 released in 2013; they've had many years where they crushed the things, compared to one where a console just matched them (and it just matched their mid tier). Further, at this point what matters more is market segmentation. The publishers got tired of the PC market hammering them on how badly they gimped the PC versions of games so as not to show up the console versions, so they mostly stopped releasing on both, or delayed the PC releases for years. Horizon Zero Dawn just released on PC three years after it released on PS4.
First it was that Nvidia is responding to AMD; now it is that they are leapfrogging. But that is clearly ridiculous. AMD isn't in production. Whatever Nvidia does, AMD can actually respond to. The AIBs aren't even designing the Radeons yet; I'm guessing TSMC isn't even making the silicon yet. AMD will be the ones to add more VRAM; they always do.
Last of Us 2? You think that's something special? Have you even seen any of the Assassin's Creed or Far Cry games? Same gameplay, better graphics, and open worlds.
Halo Infinite at 4K 120? LOL, no, sorry. You have again misunderstood marketing speak. They are not saying it will hit that on the console, as they're also saying it will do that on PC; just that it will support HDMI 2.1. Now, it may be much like other "esports" games that place very low demands on the system and get a very high FPS, but even CS:GO, which is currently about as high-FPS a game as there is, cannot hit 4K 120 on anything going, and we know for a fact that the new Xbox is not more powerful than a 2080Ti.
No, the PS5 SSD is not faster than those on PCs. It's just a standard NVMe one. If it were some better standard, the enterprise would be all over that stuff, and we have a shit ton more money than console peasants. It's just that they claim (remember, this is all unverified marketing stuff) that the firmware makes the fetching of assets more efficient. No such claim about virtual RAM has been made.
I think you grossly overestimate what this console can do and grossly underestimate what PCs can currently do and what is currently on the horizon. If Sony were literally making PCs obsolete, they wouldn't be selling consoles for $500 a pop; they'd be selling servers for $50K each.
Really? I read that they only supported parts of the PhysX engine, not all of it, as it's quite large and a long-term growth proposition. Unity uses the PhysX library.
The PS5 SSD is a bit more complex than 'just' an NVMe drive, as Linus (LTT) notes in the apology he made after running his mouth on the subject without double-checking his facts:
In the video above, Linus gives a quick primer on what Sony has enhanced/optimized re: storage, and if you need a deeper dive, Linus mentions another video you should watch, or you can just watch Sony's video.
This doesn't really relate to Nvidia graphics cards of course...
Emphasis is mine.
That's exactly what Linus at Linus Tech Tips thought as well. He later discovered that he was so wrong that he published the best apology video I've ever seen, for violating the trust his viewers put in him (his words). Descriptions of what they've done with the architecture are all over the internet, and I'm surprised a "hardware guy" like you hadn't heard about it.
Saying objectively incorrect things but with great passion and conviction does not make them any more correct.