Comments
I have the disabled card plugged in because I don't want to power down and open my computer. It'll probably annoy tech people here to know that I also have the Quadro RTX 8000 plugged into the slower PCIe slot because I misunderstood which was which. But it works for me.
My computer is actually faster than I ever need it to be, so I haven't taken the time to optimize it, but I probably should.
It's interesting to have learned there's no point to a "monitor card", so thanks for that. I'll probably just keep that disabled one in there for the foreseeable future because I don't like opening it up lol.
I disagree with his statement about there being no point. At least in the most literal sense you could interpret it.
For many people there may not be any point. Not everyone's use case is the same. Therefore, for some people, there may indeed be a good reason to run a card specifically to drive displays.
For someone who isn't doing any major multitasking, sure, there's no reason not to use all available GPUs to power a render. But say I wanted to start rendering an animation and, while that is chugging away, play a game. If all available GPUs are running the render, at best it's going to slow the render down. At worst, something is going to crash. Most likely, everything will keep running but the game will run horribly.
I can think of a few other cases where the result would be similar. Could be done, just not the best experience.
Well, the reviews for the 3080 are coming in, and so far it seems to be good.
Reviews are starting to go up.
Others are becoming available, as you'll likely notice if you open this up on YouTube.
Edit
I'm not that impressed, but there is a definite and reasonable improvement. Considering what was said about the two RTX games, I suspect that for rendering the upgrade might be worth having, even for those with a 2080 Ti.
There were no Blender comparisons, so off to see what other reviews offer.
Yes, reading the reviews & specs, I think I might save money until the multi-chiplet GPUs are ready in a couple of years & buy a GeForce RTX 3060 Ti with 8 GB of GDDR6 RAM. It looks like Nvidia did this expressly to get folk clinging to their 10XX-series Nvidia GPUs to upgrade for less than $500.
I think this is the most balanced review; there are others with more raw data, which is useful for sure.
For you old [H]ard-OCP fans, here is Brent Justice's review from The FPS Review website:
https://www.thefpsreview.com/2020/09/16/nvidia-geforce-rtx-3080-founders-edition-review/
OMG! Best Buy already has the MSI RTX 3080 on sale for $749.00? They are already starting to price gouge.
Well the RRP is only valid if it is possible to buy it on a regular basis.
... We'll see how often (or how rarely) it turns out to be regularly available.
All/most AIB cards will not sell at the FE price. Checking around, it looks like either $739 or $749 is the MSRP of the Ventus.
This does not look like price gouging. The ones selling for over $1000 on Newegg and Amazon are price gouging.
Third-party prices are not surprising. Nvidia is able to control their costs better; after all, these are Nvidia-designed chips. Obviously they sell them to third parties for a profit. Then the AIBs need to design everything else, cooler and all. So yeah, look for typical prices to be anywhere from $50 to $100 more. Cards with more exotic cooling will be more than that.
But $1000? Yeah, that's some gouging there.
It doesn't matter if you have a low-priced card if it's rarely in stock; I guess we'll see how often that occurs.
If I'm at all representative, the rumours of a double VRAM version (3070 and 3080) might be creating a market in waiting. Or it could be that gamers don't care much about VRAM.
Can confirm that the latest Windows major update does indeed open up significantly more video memory to CUDA apps like Iray on my Titan RTX than seen previously. Prior to updating, I was typically seeing something like this:
Just popped open my log file, and I am now seeing this:
Which - while not exactly matching the 900MB figure you mention - is in the ballpark of that amount and the observed amount on your 1080 Ti (11GB per GPU) based system. This despite the fact that the Titan RTX has more than double the VRAM capacity. Backing up the claim that the WDDM VRAM usage penalty is no longer proportional.
Furthermore, despite claims made by some, the presence and - almost more importantly - configuration of displays physically connected to a specific GPU do indeed affect the amount of VRAM usable (rather than just theoretically available - which is what the Daz/Iray log file excerpts quoted above refer to) for user-initiated apps like Iray. You can demonstrate this for yourself by doing the following:
What you will find is that the exact amount of VRAM used on that GPU changes with different display settings. This is because things like the WDDM framebuffer needed for each connected display exist solely in the VRAM of the GPU it is physically connected to. Meaning that there is sound logic (especially on system setups utilizing one or more high-resolution 4K+ displays) to having "dedicated" display GPUs in a system where practical (rather than just theoretical) VRAM availability is a priority.
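If anyone wants to watch this for themselves, here is a minimal sketch - not from the post above, just an illustration assuming the nvidia-ml-py Python package (imported as pynvml) is installed - that prints per-GPU memory usage via NVML, the same interface nvidia-smi uses. Run it before and after changing the display settings on the GPU in question and compare the numbers:

    # Rough sketch: assumes nvidia-ml-py is installed (pip install nvidia-ml-py).
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older pynvml releases return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # total/used/free in bytes
            print(f"GPU {i} ({name}): "
                  f"{mem.used / 1024**2:.0f} MiB used, "
                  f"{mem.free / 1024**2:.0f} MiB free of "
                  f"{mem.total / 1024**2:.0f} MiB")
    finally:
        pynvml.nvmlShutdown()

These are driver-level figures, so under WDDM they won't match the Iray log excerpts exactly; the point is how the used amount on the display-connected GPU shifts as you alter display settings.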
There's been some grumbling about the drop from 11 GB to 10. However, a lot of the knowledgeable tech reviewers have been pointing out that no game currently on the market uses 10 GB. I think the current highest-consumption game is Red Dead 2, which at 4K with all the texture settings maxed uses about 6. But that will change if/when Cyberpunk launches. But even that doesn't say it will need more than 10 at max settings.
So the only people who really want more than 10 are creators (more VRAM is great for video editing and other tasks beyond Iray) and less well-informed consumers who just want big numbers.
A lot of people seem to think Nvidia will do another launch before Xmas, but I think that's unlikely. There's just no time and likely no chips. AMD's Big Navi announcement will be late in October, so even if there is something that Nvidia wants to respond to, it would be hard to get a new product on shelves in time for the holiday unless they just have it waiting to go on spec.
Yes, I am of the mind that they already have these other cards, maybe not sitting there waiting to go on shelves, but developed and ready for mass production. Ready to respond to whatever AMD brings to the table, so they can press that button if AMD comes out swinging, or sit back and soak up some revenue and release at a much later time if AMD cannot compete. It would be the smart thing to do, and given Nvidia's growth over the past few years, I think they have definitely been in the habit of making smart decisions.
Nvidia's decision to drop VRAM capacities ties directly into its decision to launch GPUDirect Storage (effectively a consumer-oriented spin on its decade-in-the-making enterprise-level GPUDirect feature set). Assuming that the tech takes off (which it almost certainly will, imo, given that all the new consoles are already set to use the same memory/storage pipelining), there should be no reason for anyone to even need significantly larger VRAM capacity on consumer-oriented GPUs (unless, of course, they are attempting to run not-yet-GPUDirect-Storage-optimized professional apps like Iray on them...). Which is why I am personally not expecting to see, e.g., any 16GB/20GB 3070/3080 variants. Most likely ever.
Maybe. I'll see that tech finally when the A100 rack gets delivered next year. But that is not what is in the PlayStation. The PlayStation is not doing anything like that. It's off-the-shelf hardware with custom firmware to do what seems to boil down to some sort of hardware-level compression and prefetch. But until the thing is actually released, no one is going to be sure, because the explanations have been so muddled.
I usually buy my new GPU as soon as possible, so that I get the most time with it, but this time around I'm not so sure. The 3090 sure looks nice, but that is probably going to be near 2000 €, and I'd probably need a new PSU also. The 3080 looks really nice too, but I'm not so thrilled to decrease my GPU memory... even by 1 GB. I think this time I'll wait for the AMD Big Navi launch and hope team red comes with a big surprise, which hopefully forces Nvidia to release a 3080 Ti/Super or something early with more memory. If AMD can't deliver, my 2080 Ti is still a good card for rendering and I might just skip this generation.
No, they are not. They allow you to use your computer for other tasks while your render is going without having to sacrifice either render speed or Windows response time. You don't need a monster card for your display; I've used 1060s and 1070s at 4K for the task and they work just fine.
I got seriously burned when the 10x0 line came out and neither macOS (which I was using at the time) nor Studio (which I wasn't) nor Octane (which I tried to learn) could use the cards for almost a year, forcing me to switch back to Windows. Then the 20x0 cards came out and Studio couldn't use those for a time (a lot less than the 10x0 update, but still). So, I wait until the software I need them for has been demonstrated to work. I hated having that 1080 Ti sitting on a shelf for a year.
It still wouldn't make any sense to leave the display one disabled, IMO, since when you exceed the VRAM of one card, Iray will render only on the second one. So you might as well have both enabled and let Iray do its thing. Of course, this would be such an edge case to begin with: say the scene exceeds the 11GB (minus what little the display needs) of the above-mentioned 1080 Ti, but not whatever other card with similar VRAM. Or if your second card has massively more, then you probably never need to worry about what little is reserved for the displays anyway.
Some dirt-cheap card for the displays? Maybe, I don't know; it still doesn't sound like having a second card just for that makes much sense. You won't have the speed of your fast GPU for anything else that's GPU-accelerated. And the hassle of a second video card, too, just for a tiny bit more VRAM while rendering?
To each their own I guess. Not for me, but then again I game as well and need a fast GPU in Mari, Photoshop etc.
Some people just cannot be dissuaded from their myths.
Indeed
Not forgetting the problems that can be expected with having the drivers for the "dirt cheap" card in the system...
Also, cards like the ASUS TUF Gaming are faster and quieter out of the box than the NVIDIA FE.
Not familiar with the site, but Amazon has third-party sellers playing games with prices and terms, apparently.