Comments
I have no doubt that nVidia doesn't mind that impatient folk who buy the 8GB/10GB models will in many cases be forced to upgrade to 16GB/20GB models. Releasing 8GB/10GB models first instead of 16GB/20GB models also lets nVidia gauge demand much better and not get burnt with a huge stock of unsold cards built from expensive outsourced components. It's much cheaper to sit on a large stock of unsold 8GB GPUs than of unsold 16GB GPUs when the RAM is the most expensive component outside the GPU itself. The RAM might even cost more than the GPU, though I'm not privy to nVidia's cost numbers.
So aren't emissives part of Iray, and something customers can expect to be used in a typical DAZ product? They must be, because emissives were sold as part of that product.
You know, part of being a knowledgeable professional is knowing that a consumer-oriented product for hobbyists is aimed at customers who don't read technical arcana about emissives and such things. They buy a product and use it as is, because that's what sold the product to them in the first place. They don't buy a car and then proceed to rebuild the engine in their brand new car.
That product was designed that way. That is why DAZ 3D and those PAs should expect that a product they design and sell winds up getting used exactly in the manner they designed it to be used. You seem to think it's Joe Average's fault for not redesigning the product themselves to avoid those emissives.
Fact is, that's a perfectly average and legitimate scene that DAZ 3D, game engine companies, 3D companies, GPU designers and PC makers can expect average hobby consumers to construct using the COTS products they are selling to us. They came to us with their advertising, we didn't go to them, and they said we can do these things, so count on it: we customers expect the hardware, software, and digital models to be able to do those things. They need to up their game. The era of constructing 3D scenes the way PC users used to squeeze programs into 512K of RAM in the DOS days is over. They need to get on the stick!
For once I agree with kenshaw on something, LOL. People don't have to be very tech savvy to feel screwed, and they don't have to be launch-window buyers. Just seeing Nvidia release a new set of GPUs with suddenly doubled VRAM only weeks later would be a huge red flag for many customers. Nvidia announced the 3070, and that product isn't coming until October. Yet they couldn't be bothered to tell us that doubled VRAM capacities would be coming in November?
That just doesn't work. Maybe AMD blowing the doors off Nvidia would force their hand, but again, that would still point to very poor planning. Rumors about AMD being powerful and having 16GB have existed for months. A leaked AMD benchmark dates all the way back to early January of this year. Early January! That is an eternity in tech time. And yes, these companies absolutely do watch each other very closely; books have been written about the things these companies do. Why is this a surprise?
We have some new information with a leaked Chinese review of the 3090. It is not very spectacular. In the games they tested, it was only 10% faster than the 3080, and in some games only 5%. If this is indeed true, it places all talk of a 3080 Super or 3080 Ti on hold. How on earth could a Super or Ti exist if there is only a 10% difference between the 3080 and 3090? The only thing they could do is add the extra VRAM and maybe make the VRAM faster, since GDDR6X can hit 21 Gbps and the current 3080 is rated for 19 Gbps. That would add a little performance to the card, but not much.
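For a rough sense of how little that memory-speed bump would buy, here's a back-of-the-envelope Python sketch (assuming the 3080 keeps its 320-bit memory bus; these are rated per-pin speeds, not benchmark numbers):

# Peak memory bandwidth in GB/s = per-pin rate (Gb/s) * bus width (bits) / 8
def bandwidth_gbs(gbps_per_pin, bus_width_bits=320):
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gbs(19))  # 760.0 GB/s -- the current 3080 spec
print(bandwidth_gbs(21))  # 840.0 GB/s -- if GDDR6X ran at its 21 Gbps ceiling
# Roughly a 10% bandwidth gain, which is why it would only add a little.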
But this review also casts doubt on the 20GB models, because the 3090 is so expensive for so little gain. A 3080 with 20GB would make the 3090 pretty much irrelevant, since it is so close in performance. It might even be possible to overclock a 3080 to hit 3090 performance. The 3090 would need Titan features to justify its price over a 20GB 3080, but right now it doesn't look like it has any. The only feature it has over the 3080 is NVLink.
So now releasing a 20gb 3080 so soon would really sour those who bought a 3090, not just 3080 buyers.
Obviously the tech industry is always moving forward at a rapid rate, but this would be too fast and would burn a whole lot of people. The effect would catch up to them. That is why these extra-VRAM models launch in 2021. That gives them time.
Jensen Huang made a pretty strong statement during his Ampere announcement. He said rather emphatically, "To all my Pascal friends, it's safe to upgrade now." What curious wording he chose. He was pitching Ampere not just to those who bought Turing, but to those who stuck with Pascal. This was a very telling statement, because Pascal sold a lot better than Turing; various Pascal cards dominate the Steam survey. Nvidia really wants to get these people to upgrade. But this group is choosy. They skipped Turing because they didn't like the value, so Huang was trying to make his pitch to them. Now just imagine, after telling Pascal owners it is "safe to upgrade", doubling the VRAM 8 weeks later. People who have had their cards for as long as 4 years were specifically told it was safe to upgrade.
Boy that wouldn't go over so well. This statement would get mocked. It might even become immortalized...as a meme.
That would be bad for business.
Wouldn't really have to be 8 weeks later. First quarter 2021, why not?
Fact is you spend time on the forums. People talk about curved emissive surfaces being an issue here a lot, that's where I first read about it and tested it for myself. Also you had a render drop to CPU and didn't try to optimize it. Why not? Seems pretty clear to me that the only unusual thing in the scene is the emissives so they'd be the thing I'd start looking at first.
Perhaps it is a matter of context. I have also read about emissive spheres, etc., on the forum, but usually in the context of rendering speed, not VRAM. As I understand it, the sphere has lots of small polygons and each of these is emissive, compared to a flat plane which can be a single polygon. So Iray has to calculate the light distribution from each of those little polygons instead of just the one. I have never thought of a couple of hundred polygons as being a huge hog of VRAM, however, but I am open to being educated otherwise.
The FE card cooling systems are, I believe, $150-180; how accurate that is I have no idea, but it would explain why I've also 'heard' that Nvidia doesn't make much from them.
I certainly wouldn't use spheres as an emissive.
If I want a sphere I put a light inside and make it glass or partially alphaed. The couple of times I tried it, that method seemed to be faster (in Blender).
Spheres by their very nature have lots more geometry than other primitives. Now add in all that light. Each light source may be very lightweight, but thousands of them add up. I also have to assume there is some optimization going on as well. If a bunch of emitters are all very close together, emitting in the same direction at the same angle, that's pretty easy to optimize. But when they're all emitting at slightly different angles? That means calculating each one individually, and the results of all that consume VRAM.
But run the test yourself. Create a bunch of curved surfaces, spheres and whatever else you want. Make them emissive one at a time and render the scene. Eventually it will drop to CPU. Or take nonesuch00's scene, turn off the emitters and render it; I'm sure it will render on an 8GB card.
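If you'd rather script that test than click through it, here's a rough Blender Python sketch of the same idea (Blender came up earlier in the thread; the discussion here is about Iray in DAZ Studio, so treat this purely as an illustration, with made-up sphere counts and emission strength):

# Fill a scene with emissive UV spheres and render with Cycles on the GPU,
# then watch VRAM use (e.g. in nvidia-smi) as you add more emitters.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# One shared emissive material; swap in per-sphere materials to push memory harder.
mat = bpy.data.materials.new(name="EmissiveTest")
mat.use_nodes = True
nodes = mat.node_tree.nodes
nodes.clear()
emission = nodes.new(type='ShaderNodeEmission')
emission.inputs['Strength'].default_value = 10.0
output = nodes.new(type='ShaderNodeOutputMaterial')
mat.node_tree.links.new(emission.outputs['Emission'], output.inputs['Surface'])

# A 10 x 10 grid of spheres, each one an emitter.
for x in range(10):
    for y in range(10):
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.3, location=(x, y, 0))
        bpy.context.object.data.materials.append(mat)

bpy.ops.render.render(write_still=True)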
You can also use a point light with a non-point shape, rather than an emissive sphere. I haven't tested the results, at least with a view to comparing speed and memory use, but I think it might be an improvement.
You do have a tendency to be condescending, don't you? All I was saying was that emissive spheres have been usually discussed in terms of render speed. You didn't have to lecture me by repeating what I had just said in your own words. I said:
And you said, as though giving new information:
Interesting. Maybe the Supers and Tis will not support NVLink? Honestly, that's why I want it so badly: I see it as having 48 gigs, not 24...
Only the 3090 of the current cards has NVLink. I'd be pretty surprised if any of the Super cards have it.
It looks like Nvidia is trying to kill off Nvlink and SLI, at least in the consumer space. The 3080 and the 3090 use the exact same chip, so that Nvidia took the extra step of designing a board just for the 3080 that has no Nvlink connection says a lot. They could have just used the same board with different memory configs and controllers, but they didn't want to.
Clearly this is to keep segmentation from the Quadro series, which always has this feature. As VRAM counts rise in gaming cards, Nvidia doesn't want to undercut Quadro that easily. So if you want 48GB of VRAM you can only get a Quadro, a 3090, or a Titan RTX.
SLI steadily lost support in games over the last five or so years and even before then support was far from complete even in AAA titles. It was a lot of money to spend for it to only work in a third of the games you owned, if you were lucky.
With almost every game running at acceptable fps on mid tier HW there just isn't much demand for SLI anymore. NVLink really is a pro level thing and it makes some sense it is only on the pro and prosumer cards. It will suck for the DS users though.
The rumoured "Supers", likely (maybe) to hit the market next year with more VRAM, might make multi-card memory pooling less important, given that a single card with more memory is more efficient (and far less expensive) than sharing memory between two high-end cards. If they have (again rumoured) 20GB of VRAM, wouldn't that be enough for virtually all DS users?
Linus tested the 3080 with the updated Crysis (beta, remastered version). It's still struggling a bit on the FPS...
Of course, until there's an official release version of the new Crysis, we have no way of knowing what driver improvements may bring, but it's nice to know that there's still a game out there that's going to give the latest cards a run for their money.
As to whether SLI would help in this situation if it was available, I have no idea...
I wouldn't read too much into the 3080 struggling on Crysis Remastered. The highest setting in that game is literally just for trolling hardware, like the original game was. Basically, that setting goes well beyond what the highest settings in any other game are. There are a couple of articles floating around that explain the idea behind it.
For perspective, on my 2080 Ti, the second-highest setting generally gives triple the framerate that enabling the highest one does.
I would agree.
It would be an additional slap in the face to those who bought a normal card.
The setting is literally called "Can it run Crysis?". I got 12 fps with my 2070 at 4K, which was pretty funny as that's about what I got with my 8800 GTs in SLI back in 2008 on the original Crysis. Turned down, it is very playable and looks amazing.
The thing is, VRAM isn't the issue for any current game, except maybe Flight Simulator 2020. So Nvidia will be marketing these cards to creators and the clueless. Creators who don't have the budget for Quadros are a real market, no doubt, but how big is it? Keep in mind Nvidia just spent a fair bit of time telling consumers 10GB is all they need. So if next spring they start saying here's a 16GB and a 20GB card, they'll have some questions to answer. If they say the cards aren't intended for gamers, the AIBs will lose their shit; the AIBs' whole marketing is aimed at gamers. So I'm just not sure how these cards fit into the product stack or get marketed. I accept Nvidia has created them and intends to release them at some point, maybe, but they just don't make sense.
Haha
I actually saw a Buy Now button on Nvidia's site; it lasted less than a second
248 viewing, went up to 265 and then down to 248 and occasionally lower.
There are 5 sites available via Nvidia's, and the price of the card (the same card) ranges from 649.99 to 799.99; all are out of stock.
Just ordered two MSI RTX 3090 Ventus cards; they will arrive on October 20th. Here's hoping I can fit them in my PC, as they're apparently 2.7 slots compared to my current 2.2-slot 2080 Ti's. I also hope DAZ has an update by then that can utilize the 3xxx series. Will post benchmarks on the other thread when I do.
Care to donate one of your 2080 Ti's to a poor impoverished Thekd? :P
Heh, I do wonder how much money and time it would cost to ship them from Eastern Europe.
I do plan on donating one to my little nephew though. Trying to get him to learn Blender, and it'll help him massively over his 1060.
Ah blender, good cause to donate to. Good luck to your nephew lol
...anyone who replaces a 16 GB Quadro RTX 5000 with a 3090, I'd be glad to take the old card off your hands. ;-)
I'd actually considered it, then decided to go for a Titan RTX; the 3090 is almost a steal at a 1000 less - almost!
That's 700 watts just for the GPUs; that will be like running a space heater under your desk. It's bad enough for me here in the South with one lousy RTX 2060 cranking in the bedroom for an all-night render, and that's with central air at 73 degrees. I would have to have a separate wall-unit air conditioner in the bedroom if I had two 3090s.
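For what it's worth, the space-heater comparison roughly checks out. A quick sketch, assuming ~350 W board power per 3090 (the advertised figure) and ignoring the rest of the system:

gpu_watts = 2 * 350               # two 3090s at ~350 W board power each
btu_per_hour = gpu_watts * 3.412  # 1 watt is about 3.412 BTU/hr
print(gpu_watts, "W is about", round(btu_per_hour), "BTU/hr")
# ~700 W is about 2,388 BTU/hr -- comparable to a small space heater on its low setting.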