Comments
Watching this video is interesting. It's pretty much all speculation, but the gist is that what AMD releases may decide whether Nvidia releases better cards sooner rather than later, or something like that...
...my question to the above is: how do they do the current 16 GB Quadro RTX 5000 and 24 GB Titan RTX, which are both dual-slot cards, without 2 GB memory chips?
There are 2 GB and 4 GB VRAM chips. How do you think they fit 48 GB on the Quadro RTX 8000, which is still a two-slot card?
I didn't watch the above (he was annoying); did he imply there wasn't?
I'll just point out that Nvidia's Quadro growth has slowed tremendously over the past couple of years. Their most recent financial report says as much about their professional visualization segment, which covers Quadro.
Meanwhile, the gaming lineup did $1.65 billion and the data center did $1.75 billion in the same time frame, and both were way up. And this is with Turing on its last legs. Gaming is still very much their main business; data center only recently surpassed it, thanks in part to acquisitions like Mellanox. Now, when you consider that gaming cards cost just a fraction of what Quadros do, yet their sales brought in eight times more revenue, that should give an idea of how many units are being sold.
Their growth has come from gaming and data center. Back in 2018 Quadro growth was already flat, so this has been a trend. Perhaps this is another reason why the gaming lineup of Ampere is launching before Quadro. In contrast, the Quadro RTX cards launched months before the GeForce Turing cards did.
The 3080 and 3090 use GDDR6X, which is a new memory type. It is so new that 2GB chips do not exist yet. Last gen, on the other hand, used regular GDDR6, and those chips were available in larger densities for the Quadros and the Titan.
...the catch word is "yet".
So that would mean that, were they to release versions with double the memory using 1 GB chips, those "Ti"/"Super" cards, along with the Titan and Quadros, would all need to be triple-slot in width as well, to accommodate the memory and its cooling on both sides of the board.
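To make the chip arithmetic concrete, here is a rough sketch of how those totals fall out of bus width and chip density. The bus widths are the published specs; the chip densities (1 GB for GDDR6X, 2 GB for the larger GDDR6 parts) and the clamshell layouts are assumptions based on what was publicly known at the time:

```python
# A rough sketch (not an official spec) of how these VRAM totals fall out of bus
# width and chip density. Each GDDR6/GDDR6X chip sits on a 32-bit channel, and a
# "clamshell" board pairs two chips per channel, one on each side of the PCB.

def total_vram_gb(bus_width_bits, chip_density_gb, clamshell=False):
    chips = bus_width_bits // 32
    if clamshell:
        chips *= 2  # chips mounted on both sides of the board
    return chips * chip_density_gb

# Bus widths are published specs; the chip densities are the assumption above.
print(total_vram_gb(320, 1))                  # RTX 3080: 10 x 1 GB GDDR6X = 10 GB
print(total_vram_gb(384, 1, clamshell=True))  # RTX 3090: 24 x 1 GB GDDR6X = 24 GB
print(total_vram_gb(384, 2))                  # Titan RTX: 12 x 2 GB GDDR6 = 24 GB
print(total_vram_gb(256, 2))                  # Quadro RTX 5000: 8 x 2 GB GDDR6 = 16 GB
print(total_vram_gb(384, 2, clamshell=True))  # Quadro RTX 8000: 24 x 2 GB GDDR6 = 48 GB
```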
I'm not sure how they segment the sales of Quadros. But I am fairly confident that the data center segment is a lot more than Mellanox. We have a lot of InfiniBand and Ethernet adapters from them, but IIRC a 1U InfiniBand switch only runs a couple of thousand dollars.
Nvidia may not be selling a lot of Quadros as workstation cards, but they are selling lots into the data center.
I have a 2019 gamer-model HP Pavilion that is absolutely great quality. Same with the HP 8460p EliteBook that I fried doing Daz renders and the HP 8470p EliteBook that I didn't fry doing Daz renders. I'm going to convert the HP 8470p to a FreeBSD laptop. I guess I'll toss the HP 8460p, because replacing the motherboard in it costs more than it's worth; otherwise it'd make a great Ubuntu laptop. So I can recommend HP / Compaq hardware, but it quite often costs noticeably more.
@billyben_0077a25354
fair enough.
Yeh, many folks seem to be ignoring this or not noticing, or not caring because... No clue.
I love the graphs showing TFLOPS. They're about as trustworthy as a double-glazing salesman (in the UK, at least, their reputation has not been the best over the past few decades).
Well, the 3090 is the new Titan according to Nvidia, but, well, marketing.
Hi Guys,
Currently running a pair of watercooled GTX 1080 Ti GPUs with a Core i7-7820X and 32GB RAM. Originally looked at an RTX 3090, but it's a lot of money, and do I really need 24GB of VRAM...
So now I'm thinking about an RTX 3080. I know it's not a big jump in CUDA cores, but the RTX card efficiencies should still give my rendering a boost and...
...maybe spending the money I've saved on a new AMD motherboard and a Ryzen 9 3900X. Could even reuse a GTX 1080 Ti to drive the display...
What does the forum think?
Steve
A 1080 Ti for your displays with a dedicated render card is, imo, the way to go.
And because I'm not a gamer, it wouldn't matter if the 1080 Ti was in an x8 slot...
Although, looking at the AMD motherboards, it would appear they can only drive one x16 slot at x16; the other x16 slot is driven at x4 unless you go to a Threadripper TR4 motherboard, but then the cheapest Threadripper CPU is £1200...
*bangs head against wall*
dedicated display cards are POINTLESS!
Windows always reserves VRAM on every consumer-grade video card as if it had a monitor connected. It may even have the card try to output a signal; that is less clear. But there is clearly nothing to be gained by not selecting every installed Nvidia card for use in rendering.
This myth that there is some benefit to having dedicated cards needs to die. People are wasting money and resources on it.
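For anyone who wants to check rather than argue, the reservation is visible per card. A minimal sketch, assuming nvidia-smi (which ships with the Nvidia driver) is on the PATH; it lists total, used, and free VRAM for every installed GPU, monitor attached or not:

```python
# Minimal sketch: report total/used/free VRAM for every installed Nvidia GPU.
# Assumes the nvidia-smi tool (installed with the Nvidia driver) is on the PATH.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.total,memory.used,memory.free",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True)

for row in result.stdout.strip().splitlines():
    print(row)  # one CSV row per GPU: index, name, total, used, free
```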
What if you have one card that is doing DAZ rendering and the other one you have excluded from rendering in DAZ... wouldn't that be ideal for actually being able to use the machine while it's rendering?
I'm not trying to agitate here, it's just what I've always been led to believe. Honestly, I have two cards, but the GTX 1660 is disabled in the BIOS because I couldn't get the Quadro to run correctly (driver conflicts) when I was using it, so I just decided to use the one card for everything. But in the situation above, I'd think you'd see a benefit from running a dedicated "display card".
Also not trying to stir the pot, but would you consider Titans "consumer grade"? If the 3090 ends up having TCC mode capability, would that not invalidate your statement?
I use a Threadripper, but I'd put the render card in an x8 slot if I didn't have x16 available; I believe it doesn't make a difference to the actual render times, only to the initial data transfer. I could, however, be remembering wrong.
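For a rough sense of what x8 versus x16 costs on the upload side, here's a back-of-envelope sketch; the scene size is just an assumed example and the bandwidth figures are theoretical PCIe 3.0 maxima, so real numbers will be somewhat worse:

```python
# Back-of-envelope: slot width mainly affects how long the initial scene upload takes,
# not the render itself. PCIe 3.0 moves roughly 0.985 GB/s per lane per direction.
PCIE3_GBPS_PER_LANE = 0.985
scene_gb = 6.0  # assumed size of the scene data pushed to the GPU at render start

for lanes in (16, 8, 4):
    seconds = scene_gb / (lanes * PCIE3_GBPS_PER_LANE)
    print(f"x{lanes}: ~{seconds:.2f} s to upload a {scene_gb:.0f} GB scene")
```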
I disagree, as whilst some VRAM is reserved, the card isn't used for anything but rendering. Putting it in capitals doesn't make your emphasis any more correct, or even correct at all.
It remains at idle, because no monitors are plugged into it.
... And stop banging your head against the wall, it must be giving you concussion.
Kenshaw's point about memory is well supported; we went through this over the amount of memory Windows 10 reserves, though there are other factors there. What isn't clear is the use of resources for any kind of acceleration (OpenGL, OpenCL, etc.). We know that Iray doesn't automatically use all cards, and it may well be that if one or more applications are actually doing more than sending raw data to the viewports, then that load is going to be specific to the card(s) used for their display (or whatever other card is set in their preferences, if appropriate).
Man I miss Windows 7
I am looking for a single-slot Radeon RX 550 or better to put in my PCIe x16 v2.0 (I think) slot for when I get an Nvidia 30XX, but modern (RX 450 or greater) single-slot cards with 4GB of RAM or more from AMD are hard to find, and given the state of the newest GPUs they are extremely overpriced. In fact I've only found a couple, and none with more than 4GB of RAM.
If the card is disabled why is it plugged in? And again no. There is no benefit when all the cards are consumer grade. NONE.
If you have a pro card or prosumer card that can be put in TCC mode, and have the know-how to do so, then there is a benefit.
I have 2 cards. I use both for rendering. I use my computer while rendering without issue. I think the people who have issues using their systems while rendering are either using the CPU, which always pegs the CPU, or have very low-spec GPUs (which likely means they fail over to CPU). That's the reason WDDM reserves VRAM in the first place.
While you render on the GPU in Iray, other GPU-accelerated applications might be slower. Browsers use some GPU acceleration these days, though I never noticed any slowdown, nor would I care much. Obviously games would be slower while you render, but who does that? I use Mari, which lags noticeably during rendering, as it's a very GPU-heavy application. I tend not to do both at the same time, and it would definitely not be worth it to have a separate video card for it. Waste of processing power, when I could just render faster on both GPUs.
Overall it's absolutely not worth it to have a separate video card just to drive the displays, even less when you want to use a 1080 Ti for that. I mean, how decadent is it to use a powerful card like that, with 11GB of VRAM, to just drive a display and nothing more?
Fortunately, nobody at home is forcing one to change.
At work I managed to fight back for two years, but at least I got the third monitor to lessen the pain of W10+Office365...
At least somewhat related to the discussion about multiple GPUs: the further you stay away from integrated graphics, the better your system works.
Well, yes - and no. I have a GTX 980 Ti, a 1080 Ti, and a GT 740. The 740 drives my monitors (2 at 1920 x 1080) and is pathetic for Iray. It was also the only video card in the box when I started, before Iray. The first couple of iterations of Iray pretty well ate the GPU, to the point that Win 7 solitaire was unplayable due to screen lag. That was resolved quite a while back, and I plan to add a 3080 to the mix; if Win 7 can find the monitors on the 740 when it relocates to slot 7 I'll keep it, otherwise the 980 Ti will inherit the monitors - but not as a dedicated card.
So, the 3080 apparently performs about as fast as the discontinued Radeon VII in mining...
https://wccftech.com/nvidia-geforce-rtx-3080-ethereum-crypto-mining-performance-leaks-out/
I didn't dispute the RAM, indeed I acknowledged it.
I have no control over what the system does with the cards, other than not plugging in monitors; not plugging in monitors, however, can certainly be significant.
When rendering, I only use the dedicated card (and the CPU, which in Blender contributes more). Continued checking with HWiNFO64 and GPU-Z shows the render card at 0% usage outside of rendering, which can't be considered conclusive, as I have no idea how accurate they are at reporting every possible use and I don't monitor 24/7, but outside usage is certainly much lower than if I also used them for monitors. I see the monitor card being used for what appears to be a variety of functions, but not rendering.
That is what a dedicated card is; why some seem to feel it isn't valid, I have no idea. Rendering doesn't affect viewport display by any discernible amount. They are as separate as I can make them.
... ergo: it's a dedicated card.
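For what it's worth, the same kind of spot check can be done from the command line instead of HWiNFO64/GPU-Z. A small polling sketch, again assuming nvidia-smi is on the PATH, and with the same caveat that the utilization counter may not capture every possible use of the card:

```python
# Sketch: poll per-GPU utilization and memory use every few seconds while a render runs.
# Assumes nvidia-smi is on the PATH; stop it with Ctrl+C.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True).stdout
    print(out.strip())
    time.sleep(5)
```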
...that's the setup I have on the render system.
...that's why I stayed with it and beefed up system security. Windows 7's "footprint" on VRAM is almost negligible in comparison to 10's, and that is important when you create epic-level scenes like I tend to do.
...ugh, the forum servers are slow today. I actually got a popup while posting an earlier comment that asked if I wanted to "remain on the site" or "leave", similar to FB.
Hardware vendors are. Newer mobos and such don't get Win7 drivers anymore. Had to switch to 10; had I known that, I would have bought an older mobo, just a higher tier or something. Them's the breaks, I guess. Ripped out as much of the crap I don't need as I could: no telemetry or auto updates, or that stupid AI lady or whatever she is, lol. Not nearly as slim as I got Win7, but it has to do. Now that I am transferring over to a Blender pipeline, I'm spending less and less time on the Windows partition and more on the Linux Xfce one. It is slim as hell, even compared to Win7. Still need Windows for a lot of my graphics programs, unfortunately.
Because you are credibly contradicting him. He doesn't like it when people do not just accept whatever he writes because of all the caps and exclamation points.
Um, guys... do you know that the Windows 10 2004 update mostly fixes this issue?
Now Windows 10 only reserves a flat 900 MB (that is, megabytes) of VRAM REGARDLESS of VRAM capacity. It no longer takes a percentage like it used to.
On my 1080 Tis, they used to report having 9.1 GB available. But after updating to 2004, well, here it is straight from my Daz log file:
2020-09-15 22:28:59.187 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 1080 Ti): compute capability 6.1, 11.000 GiB total, 10.041 GiB available
2020-09-15 22:28:59.189 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (GeForce GTX 1080 Ti): compute capability 6.1, 11.000 GiB total, 10.039 GiB available, display attached
Boomshockalocka!
As you can see, they now report over 10GB available. So you guys do not need to freak out over whether the 3090 has TCC mode or not. The 3090 should report 23GB of available VRAM. I think this is fair. It is certainly better than it was before, and you don't have to shell out extra cash for a Titan or Quadro just to get TCC mode and a single extra GB of VRAM.
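As a quick sanity check, the arithmetic on those log lines lands in the same ballpark as the flat ~900 MB figure mentioned above (which is my observation, not an official Microsoft spec):

```python
# Total minus available VRAM from the Iray log lines above gives what Windows
# and the driver are holding back on that 1080 Ti.
total_gib = 11.000      # "11.000 GiB total" from the log
available_gib = 10.041  # "10.041 GiB available" from the log
reserved_mib = (total_gib - available_gib) * 1024
print(f"~{reserved_mib:.0f} MiB reserved")  # roughly 1 GB, close to the flat ~900 MB figure
```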
This is something that perhaps @RayDAnt could test with his Titan RTX to see how much VRAM it reports with 2004.
And this also means that you are not quite correct, kenshaw, because while Windows does reserve VRAM, 900MB of VRAM is probably less than what the display GPU is using to push the Daz app and Windows. Plus, this neglects whether somebody uses the Iray preview mode in the viewport. With a dedicated display GPU, they can choose to run Iray preview on the display GPU, so the rendering GPU is not burdened by it.
One last note before some of you rush out to install 2004, if you haven't already: the 2004 update is a big one and will take some time to install. It took much longer than my previous updates. At one point I started to wonder if my PC was not going to reboot; there was a long stretch of around 10 minutes where the screen was totally black with no indication of anything going on. So give yourselves plenty of time for this update. But yes, this update is excellent news for anybody who uses rendering software.