Can someone provide an example of an 11 GB image downgraded to below 8 GB?

Can anyone show me the before and after of an image downgraded from 11 GB to below 8 GB? I'm trying to decide between 8 GB and 11 GB VRAM cards. Based on a quick search here, the 8 GB cards give you better rendering time per dollar spent, but you run the risk of having to cut back on quality, or at least spend additional time downgrading with minimal degradation. I'd love to see the real difference between the two to better determine whether the increased rendering time and/or price would be worthwhile.
Comments
VRAM has nothing to do with the size of the image rendered, only with how much scene data the card can hold before the render drops to CPU.
The rendered image looks the same regardless of card or CPU.
If you're seeing a difference, then someone may have reduced texture sizes to fit it on the card: that is a different problem.
Image quality generally depends on convergence. How you attain that convergence doesn't matter, nor does how long it takes. However, you can include higher-resolution textures and lighting and denser geometry in the same scene rendered on an 11 GB card vs. on an 8 GB card without dropping to CPU. There are scenes you would need to optimize to fit onto an 8 GB card that you might not need to if using an 11 GB card. So that would affect the quality of your result.
You can test this difference yourself, regardless of your current hardware. Within a generation, as a rule, the more memory your GPU has, the more powerful it will also be, and the more efficiently you'll be able to spend your time. Generally, the decision depends mostly on how committed you are to exploring 3D as an art form or as a means to earn a living. That's not a question anyone but you can answer.
Just to give a rough rule of thumb, each G8F in the scene will require 2 GB of VRAM. So with an 8 GB card, four figures with a simple scene, or three figures and a more complex background scene, is possible (see the sketch below). If that represents what you do, an 8 GB card would be OK.
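A quick back-of-the-envelope check using that 2 GB-per-figure rule of thumb. This is a sketch only; the per-figure cost and the scene overhead figures are illustrative assumptions that vary a lot with textures, hair, and geometry:

```python
# Rough VRAM budget check based on the ~2 GB per G8F rule of thumb above.
# All numbers are illustrative assumptions, not measurements.

def fits_in_vram(num_figures, scene_overhead_gb, card_vram_gb,
                 gb_per_figure=2.0):
    """Return True if the estimated scene size stays within the card."""
    estimated_gb = num_figures * gb_per_figure + scene_overhead_gb
    return estimated_gb <= card_vram_gb

print(fits_in_vram(4, 0.0, 8.0))   # True:  4 figures, trivial scene -- just barely
print(fits_in_vram(4, 1.5, 8.0))   # False: 4 figures + a bigger set drops to CPU
print(fits_in_vram(3, 1.5, 8.0))   # True:  3 figures + more complex background
```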
There are hair models that put a HUGE load on the card, such that on my 6 GB card I can only have one figure plus a simple scene when using the 'Capri' G8F hair and still fit it in GPU. Add more figures and it drops to CPU, and the render takes much more time.
I have an 11 GB and an 8 GB card. Getting bigger scenes to fit on the 8 GB card is usually pretty simple: I run Scene Optimizer and reduce all the 4K and larger maps to 2K unless the character or prop is very close to the camera. I've run scenes rendered with and without Scene Optimizer used this way through image comparison software, and either no difference was found or only a few pixels diverged.
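For anyone curious what that kind of comparison looks like, here is a minimal sketch of a pixel-divergence check between two renders. The file names are hypothetical and this is just an illustration (requires Pillow and NumPy), not the specific comparison software used above:

```python
# Count diverging pixels between a render made with full-size maps and
# one made after reducing the 4K+ maps to 2K. File names are hypothetical.
import numpy as np
from PIL import Image

before = np.asarray(Image.open("render_full_maps.png").convert("RGB"), dtype=np.int16)
after = np.asarray(Image.open("render_2k_maps.png").convert("RGB"), dtype=np.int16)

# A pixel "diverges" if any channel differs by more than a small tolerance.
diverged = np.abs(before - after).max(axis=2) > 2
print(f"{diverged.sum()} of {diverged.size} pixels diverge "
      f"({100.0 * diverged.sum() / diverged.size:.4f}%)")
```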
Kenshaw (or anyone else with the same situation), do you find that you often just optimize to get down to 8 GB? Or do you sometimes run it at 11 GB with just the one card? If the end result is nearly identical, then is the optimizing time worth the rendering speed increase?
I would say I'm mostly considering a used 1080 Ti or a new 2070. Both look to render at about the same speed, so the trade-off is the risk of used hardware vs. not having to spend as much time optimizing. I am also planning on investing in a next-gen GPU after the launch (which is why I'm not splurging for a 2080 Ti now), so I see some advantage to the 1080 Ti teaming up with a new (11 GB or higher VRAM) GPU later in the year.
Severin, I apologize, but I don't understand the concept of convergence. Can you explain it to a noob, please? Also, are you saying that there is a noticeable difference between the image at 11 GB vs. the image after optimization at 8 GB?
Iray doesn't work by rendering each pixel in turn and then moving on to the next until it gets to the end; rather, it uses a method called path tracing, where it fires a line of sight until it reaches a final colour. As a result the image builds up patchily, which is why it looks like a very grainy photo to start with and clears as it goes. Convergence is an estimate of how close any given pixel is to its "final" colour, and one of the conditions for ending a render is that X% of the pixels are deemed to be converged. The other criteria are the number of paths traced and the time taken; Iray stops as soon as any one limit is reached. As a result it's possible that the GPU render will reach the convergence limit and stop, while the CPU render will hit the default two-hour timeout and stop while less converged, and so grainier. The stop values can be adjusted in Render Settings, or the quality option can be turned off and Iray will just run until manually stopped.
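To make those three stopping conditions concrete, here is a simplified sketch of that logic in Python. This is not Iray's actual code; the pass function and the default limits (other than the two-hour timeout mentioned above) are made up for illustration:

```python
import random
import time

def trace_one_pass(scene):
    # Placeholder: a real renderer fires more paths per pixel here,
    # and the grainy image gradually clears.
    scene["converged_pct"] = min(100.0, scene["converged_pct"] + random.uniform(0.5, 2.0))

def render(scene, target_pct=95.0, max_passes=5000, max_seconds=2 * 60 * 60):
    """Stop as soon as ANY one limit is reached, as described above."""
    start = time.monotonic()
    for passes in range(1, max_passes + 1):
        trace_one_pass(scene)
        if scene["converged_pct"] >= target_pct:
            return f"converged after {passes} passes"   # a fast GPU usually ends here
        if time.monotonic() - start >= max_seconds:
            return "timed out while still grainy"       # a slow CPU render may end here
    return "hit the pass limit"

print(render({"converged_pct": 0.0}))
```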
It depends. I do almost all my renders overnight using RenderQueue, so I often don't even know a scene was too big for both cards until I check how long it took to render. If I have to redo a render due to some issue, not uncommon for my workflow, I will then optimize to try to get it on both cards. That gets more renders done each night and therefore gets my VNs done faster.
I have the cards in question. I wouldn't buy a 2070 right now; I'd get the 2070 Super at roughly the same price but with 2080-level performance. As to pairing a used 1080 Ti with whatever the next-gen flagship is when it launches, you might have good results, but a four-year-old card is a gamble. I certainly wouldn't buy the 2080 Ti right now.
Also keep in mind Daz says the newest DS release allows VRAM pooling over NVLink, so getting an old flagship and a new one might be very inefficient. A pair of 2070 Supers, or the equivalent next-gen cards, would give you more than 11 GB and more CUDA cores than a single 2080 Ti, for roughly the price of one 2080 Ti.
Well, VRAM pooling would make a huge difference. Would all cards be able to do this? I can't find anything recent that talks about VRAM pooling in Daz (not sure what DS is, as I'm brand new to this). A 2070 Super costs about 25% more than the equivalent 2070 in Canada; a 2060 Super is about the same price as the 2070. If VRAM pooling will be a reality in Daz soon, I will certainly not touch a used 1080 Ti. Is there an expected/estimated date for this change?
DS is Daz Studio. The most recent release, 4.12, includes NVLink memory pooling according to the release notes:
https://www.daz3d.com/forums/discussion/341001/daz-studio-pro-4-12-highlights
No, only cards with the NVLink connector could do this, the 2070 Super and up plus the RTX Quadros.
To clarify even further: DS is Daz Studio, which is the program you are probably using. Daz or Daz 3D is the name of the company.
But... just to point this out, you will find a lot of people who call the program just "Daz" for short, regardless of whether it's wrong or right. You will also find people who try to differentiate them by calling the program "Studio".
It's like the guy who created the GIF format one day saying it's pronounced "JIF", but most people call it like they see it, with a hard G. You can't control how people will use language.
The VRAM thing is not about quality; it is about capacity. It is certainly possible that it could affect quality under certain circumstances, but it all depends on what you wish to achieve with Iray. Consider that only two mass-market consumer GPUs offer 11 GB or more; the overwhelming majority of users here and everywhere else are using GPUs with 8 GB or less. So everyone has their limitations.
You can make significant alterations to assets in your scenes to drop memory use and still maintain quality in the final image. But again, it depends on what you want to do. In general, though, you just have to think logically about what the camera can see, and how far away it is. For any image with multiple people, it stands to reason that in order to have multiple people, they are probably not up close to the camera, or at least most of them are not. So for anyone not up close, you can compress/downsize the textures they use and the detail will be the same in the final render. This is especially true if you use any depth of field (DOF).
Just doing this would save a lot of VRAM (a rough sketch of the idea is below). But there is a lot more: you can reduce the mesh resolution as well.
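As an illustration of the texture part, this mimics what a tool like Scene Optimizer automates. The folder path, the 2048 px cap, and the JPEG-only glob are assumptions made for the example (requires Pillow):

```python
# Downscale any texture map larger than max_px, leaving smaller maps
# (and, by choice of folder, close-to-camera assets) alone.
from pathlib import Path
from PIL import Image

def downsize_maps(folder, max_px=2048):
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path)
        if max(img.size) <= max_px:
            continue                            # already small enough
        scale = max_px / max(img.size)
        new_size = (round(img.width * scale), round(img.height * scale))
        img.resize(new_size, Image.LANCZOS).save(path, quality=92)
        print(f"{path.name}: {img.size} -> {new_size}")

# Only point this at textures for figures far from the camera:
downsize_maps("textures/background_figure")     # hypothetical folder
```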
Personally, I don't think right now is a good time to be buying anything. We have new hardware launching just a few months from now. While Iray does not use AMD GPUs, AMD is set to release a line of both CPUs and GPUs this fall. While no date has been announced, AMD has confirmed they will launch before the new consoles do. Nvidia is also expected to release new GPUs, but we have no confirmed information. I believe they will, but even if they do not, I expect the new AMD lineup to shake up GPU pricing quite a bit, both new and used. If people rush to upgrade, the used market will be flooded, causing used prices to drop. And if Nvidia does release Ampere, which I expect them to, then prices on those cards will have to be competitive.
Basically, I am saying your dollar will buy you more if you can wait a couple of months. In the best case, you can buy a new Nvidia 3000-series GPU at the price you are looking at now, and it will be much faster than the 2070 or 1080 Ti. If not, then AMD's cards should shift prices of current cards down, making your options better.
Iray can use any combination of GPUs that are Iray capable; they can be totally different. But for NVLink they would need to be the same and support the NVLink bridge. Only specific models support NVLink at all, and you might guess it's the higher-end ones that do. The lowest one that does is the 2070 Super, and specifically the Super; the original 2070 does not. The 2070 Super uses the same board that the 2080 does, which is why it has the NVLink connector. However, we have only seen 2080 Tis paired up over NVLink. I have not seen any tests of 2070 Supers using NVLink with Daz Iray. If it does work, then yes, that would be pretty great.
Also note that NVLink only shares texture data as of right now. So no, you do not double the VRAM capacity with NVLink; some data is still duplicated. But texture data can take up quite a bit of memory, so that is still a big deal (see the rough arithmetic below).
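A back-of-the-envelope way to think about that, under the assumption that pooled textures end up split roughly evenly across the pair while everything else is duplicated per card. All the GB figures are illustrative, not measured:

```python
# Capacity check for NVLink pooling as described above: texture data is
# pooled across the cards, while geometry/working data is duplicated on
# each card. The even texture split is a simplifying assumption.

def fits_with_nvlink(texture_gb, other_gb, vram_per_card_gb=8.0, cards=2):
    per_card_gb = texture_gb / cards + other_gb
    return per_card_gb <= vram_per_card_gb

# 9 GB of textures + 3 GB of duplicated data:
print(fits_with_nvlink(9.0, 3.0))            # True:  7.5 GB on each 8 GB card
print(fits_with_nvlink(9.0, 3.0, cards=1))   # False: 12 GB won't fit one 8 GB card
```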