Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...

Comments

• Sevrin Posts: 6,310
    Diaspora said:

    So, has DAZ commented on how long we should expect before DAZ Studio iRay will support Ampere?

    Right now I'm fine with my RTX 2080 for games and DAZ is the big incentive to upgrade and if it takes them months before an RTX 3090 or 3080 ti will even work in it then I'll just hang tight in the meantime. 

I don't think it's actually down to Daz per se; my guess is it (an updated version of Iray, if needed) will have to come from Nvidia, then be added into DS (after testing etc.)... and quite possibly there isn't even an 'official' answer yet.  So yes, hang tight basically; let the reviews hit, then go from there :)

You do realize that most of the entertainment value gained from computer hardware comes from speculating about future computer hardware.

• Robinson Posts: 751
    Sevrin said:

You do realize that most of the entertainment value gained from computer hardware comes from speculating about future computer hardware.

    There are dozens of YouTube channels with tech talking heads keen to tell you about their "sources", as if they're George Smiley covertly gathering intelligence on future plans, or similar.  The reality is apart from a very few (one or two), they're in touch with AMD, NVIDIA, Intel, etc. marketing droids and that's who they get their information from.  They can be entertaining, especially when Adored TV gets genuine "leaks" about Intel's corporate culture, but mostly they're a complete bore.

• nonesuch00 Posts: 18,320

    If the 3090 will only have 12GB RAM as stated at that site (and it wasn't a copy/paste mistake or intentional misinformation) then I guess I'll buy whatever 30XX series has 8GB RAM for the best price. That flies in the face of what every other gossip site is claiming though (of course they are probably all copying the same source so if that source is wrong they are all wrong).

• outrider42 Posts: 3,679

    On the subject of Iray support, I've said before that I believe Ampere will support Iray out of the box. Iray RTX uses OptiX 6.0, which does not need to be recompiled for a new GPU arch to function on it. If you look up the old Iray documentation, there is an explanation for why we needed to wait so long. Old Iray used OptiX Prime, and that is not quite the same as full OptiX. OptiX Prime needed to be recompiled for every new GPU architecture, and that took a long time.

We don't use OptiX Prime anymore, so Ampere GPUs should work with just a driver update. Celebrate!

    Now I would wait for verification to be sure before running out and buying Ampere. But I do believe it will be fine.

    It is going to be quite amusing to come back to this thread in a few months. I'm reminded of how people completely pushed back on my suggestion that consumer GPUs would get Tensor cores months before Turing was announced. I remember that just like it was yesterday..."Tensor cores are only for scientific use!!!" LOL. But I was proven correct in the end.

I will say that I believe Nvidia is doing a number of things to purposely throw people off, including (and especially) leakers. Some leaks may be just a smoke screen. Nvidia is not new to this game. Some of the leaks come from people in labs working on these cards, at the risk of their jobs. They have seen these cards. But what they see may not come to mass production. They could be prototypes, or maybe Nvidia just changes their mind. It has been so crazy that we don't really know for sure what fab Nvidia is using. Is it TSMC 7? Samsung 8? Or something else? Adored's last video was a fun one, where he makes a kind of convincing argument that it is a different node from what we've heard...while at the same time everything Nvidia stated was still technically correct. Oh, and let's not forget that Samsung has supposedly given Nvidia a great deal, on top of capacity. That is going to play a role.

Even this 12 pin power connector could be a smoke screen with a twist. The idea of them using a new 12 pin connector has made everybody assume Ampere must be super power hungry. But what if that is not entirely true? What if the 12 pin connector is just Nvidia...being Nvidia? As wacky as it sounds, hear me out. Because otherwise, why do they really need this? Why not just use two 8-pins for a total of 16 pins? That would be enough for any massive GPU. We've had 500 Watt GPUs before, people, and they didn't need a special connector. So why now? I'm sure Nvidia will have an explanation why; they might say that by combining certain pins it allows for better stability or something. But in actuality, IMO it is about creating segmentation. AMD doesn't use a 12 pin. But power supply makers are going to support Nvidia because of their dominant position. This is about marketing and flexing. These new power supplies will probably have a new sticker on them with Nvidia branding...see what I am getting at here? (And once again this move is all about competition, which someone seems to think doesn't exist. It's just pure coincidence that they are both launching major next generation hardware a couple months apart.)
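To put rough numbers on the "two 8-pins would be enough" point, here's a back-of-the-envelope Python sketch using the on-paper PCIe power limits (75W from the slot, 75W per 6-pin, 150W per 8-pin). These are spec budgets, not what cards actually draw, and real cards can exceed them transiently:

```python
# PCIe power delivery limits per the spec, in watts.
# These are on-paper budgets; actual draw varies and can spike higher.
SLOT_W = 75      # power from the PCIe slot itself
PIN6_W = 75      # per 6-pin auxiliary connector
PIN8_W = 150     # per 8-pin auxiliary connector

def board_power_budget(n_6pin=0, n_8pin=0):
    """On-paper power available to a card with the given connectors."""
    return SLOT_W + n_6pin * PIN6_W + n_8pin * PIN8_W

print(board_power_budget(n_8pin=2))  # slot + 2x 8-pin = 375
print(board_power_budget(n_8pin=3))  # slot + 3x 8-pin = 525
```

So even a ~350W card fits, on paper, within the slot plus two 8-pins, which is why the 12 pin reads as a choice rather than a necessity.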

Back to VRAM: most rumors indicate AMD going with 16GB. If Nvidia goes with 12 and 20, then they can attack that AMD card from both sides. They can undercut it at 12GB, and they can go beyond it with 20GB while charging a high premium. So they can release a 3080 Ti near $1000, but it only has 12GB. Then they have a 20GB version, but it costs hundreds more. Haha!! Then the ball will be in AMD's court to respond to this opening attack. Nvidia is probably counting on AMD not wanting to directly compete at the exact spec, in which case AMD sticks to 16GB. The mind games at play are intense. That is why books are written about this stuff.

• Sevrin Posts: 6,310

    I was checking the Iray Render blog for something else, and came across this tidbit suggesting that Iray support for Ampere should not be too far off.  That was supposed to be at the end of last month, but there's been no further update about it on the blog.  Daz is currently on 2020.0.1, so I guess we'll get 2020.1.0 once it's released.

iray 2020.1.0 beta (and iray 2019.1.7) released

    Main features that will be production ready with the 2020.1.0 final coming later this month:

    Support for Ampere/SM8.0/GA100 (and yes, perf numbers are looking very good so far :))

    Updated AI denoiser (it will now in addition always use the latest and greatest version that comes builtin with the NVIDIA driver, which results in 90% size reduction of the library!)

    (Optional) Use of AI based SSIM prediction to get the current level of quality without the need to have a reference image (which will then be used to predict the remaining render time later-on, see http://on-demand.gputechconf.com/siggraph/2018/video/sig1851-carsten-waechter-adaptive-rendering-powered-by-optix-ai-features.html)

    Support for Primitive Variables (PrimVars, or in general: user data)

    Support for custom postprocessing added

    Support for the new AxF 1.7 SDK (see https://www.xrite.com/axf)

    Many many fixes and improvements

    • July 06, 2020, 10:01am

    https://blog.irayrender.com/post/622877722533838849/iray-202010-beta 
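For anyone wondering what the "AI based SSIM prediction" bullet means: SSIM is a standard structural-similarity metric between two images; the blog's twist is predicting it without a reference image. A minimal single-window sketch in Python, just to show the idea (real implementations, like the original Wang et al. metric, average over local windows; this is not Iray's actual code):

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    # Simplified SSIM computed over the whole image at once.
    # L is the dynamic range of the pixel values (1.0 for [0, 1] data).
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and the score drops as structure diverges, which is how a renderer can use it as a "how converged am I?" signal.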

• NylonGirl Posts: 1,938
    Robinson said:

    Honestly, unless you're using it for actual paid work or you're a baller, the thought of spending over a grand on any graphics card seems completely ridiculous to me.

    Maybe they're trying to run the new Microsoft Flight Simulator at more than 12 frames per second.

• Drip Posts: 1,206

Will the new cards have improvements over my 2070? Oh yeah, definitely, if there's a 3070 in the lineup, I fully expect it to outperform the 2070 Super, which already looked like an interesting upgrade to me, with a rendering performance increase that could get as high as 30%. But, I didn't upgrade to the Super, so it's not just about rendering speed. Other things for me to consider will be: Will it fit on my mobo and inside my case? How much power does it require? (the 2070 draws 175 Watts, while the 2070 Super draws 215), and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.
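To put rough numbers on the texture-size point: an uncompressed 8-bit RGBA texture costs width x height x 4 bytes in memory. Iray may compress or mip textures, so treat this Python sketch as an upper-bound estimate, not a measurement:

```python
def texture_mb(resolution, channels=4, bytes_per_channel=1):
    """Uncompressed size of a square 8-bit texture, in MB."""
    return resolution * resolution * channels * bytes_per_channel / (1024 ** 2)

for res in (256, 1024, 2048, 4096):
    print(f"{res:>4} x {res:<4}: {texture_mb(res):6.2f} MB")
```

A single 4K map is 256 times the memory of a 256px map (64 MB vs 0.25 MB uncompressed), which is exactly why 4K nail textures are such a waste.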

• Sevrin Posts: 6,310
    edited August 2020
    Drip said:

Will the new cards have improvements over my 2070? Oh yeah, definitely, if there's a 3070 in the lineup, I fully expect it to outperform the 2070 Super, which already looked like an interesting upgrade to me, with a rendering performance increase that could get as high as 30%. But, I didn't upgrade to the Super, so it's not just about rendering speed. Other things for me to consider will be: Will it fit on my mobo and inside my case? How much power does it require? (the 2070 draws 175 Watts, while the 2070 Super draws 215), and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    We've already seen some high-poly clothing items by new PAs who don't even have the discipline of retopologizing to quads.   That might become a trend.

    Post edited by Sevrin on
• i53570k Posts: 212

Another thread shows that subD4 without using a normal map might use a bit more memory than subD2 with one, but renders much faster.  I don't mind if HD clothing becomes a trend.  Using subD to optimize a scene is far easier than screwing with textures.

• nonesuch00 Posts: 18,320
    i53570k said:

Another thread shows that subD4 without using a normal map might use a bit more memory than subD2 with one, but renders much faster.  I don't mind if HD clothing becomes a trend.  Using subD to optimize a scene is far easier than screwing with textures.

That's because subD2 with a normal map does a lot more calculation to get to where subD4 without one already is. So folk can talk all they want about wanting a faster GPU that doesn't need more memory in particular, but the fastest calculation a GPU is ever going to make is one it doesn't have to make, because the model already has the geometry created instead of the GPU creating it from normal/displacement maps. A GPU with a huge amount of RAM is a huge speedup.
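The memory side of that trade-off is easy to ballpark: Catmull-Clark subdivision roughly quadruples the face count per level. A Python sketch (the 16,000 base-quad cage and 32 bytes per vertex are assumptions for illustration, not measured figures for any actual Genesis mesh):

```python
def subd_faces(base_faces, level):
    # Catmull-Clark roughly quadruples the quad count per subdivision level
    return base_faces * 4 ** level

BASE = 16_000        # assumed base-cage quad count for a G8-class figure
BYTES_PER_VERT = 32  # assumed: position + normal + UV as 32-bit floats

for level in range(5):
    faces = subd_faces(BASE, level)
    # for a closed quad mesh, vertex count is roughly equal to face count
    mb = faces * BYTES_PER_VERT / (1024 ** 2)
    print(f"subD{level}: {faces:>9,} quads, ~{mb:6.1f} MB")
```

Under these assumptions subD4 lands around 4 million quads and ~125 MB, on the order of two uncompressed 4K maps, which fits the "a bit more memory but renders much faster" observation.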

• kenshaw011267 Posts: 3,805

If the consumer cards do have some large bump in VRAM (I'm still betting against it), that would also suggest the Quadros would as well. There are a lot, and I do mean a lot, of RTX 6000s and RTX 8000s out there. If Ampere is really that much better at rendering, and if the DGX racks do become reasonably widely available, there are going to be lots of Quadros on the used market. I finally got pricing Friday, which was shockingly low ($75k for the 4x40GB A100 system and $180k for the 8x80GB), and these are definitely on TSMC, for those who were speculating that Ampere might not be. Those racks are not far off what we've been charging customers for dual Quadro systems. My boss wants one to test, but if they pan out...

• outrider42 Posts: 3,679
    edited August 2020

The A100 is a completely different beast altogether. It doesn't even have the dedicated ray tracing cores that the Quadro and consumer cards will have. And it is also a low volume product with a high markup. That makes it an easy choice for Nvidia to use TSMC. The A100 is basically an Amped-up version of Volta and its true successor. It may as well have a different name. So it is entirely possible that the rest of the lineup is on a totally different fab.

    And why would Nvidia care about making Turing obsolete? They have already stopped production of most Turing products and stock is reducing at stores. Nvidia wants you to buy their new products, not their old products, LOL. And by pushing Ampere high enough, they can even entice those who bought RTX Quadros just last year. If your business is making money using this equipment, you will make that upgrade.

    And Nvidia has done this in the past plenty of times. Even Turing itself made Pascal obsolete for ray tracing applications over night.

    Post edited by outrider42 on
• marble Posts: 7,500
    Drip said:

... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

I don't think NVidia care too much about IRay users as it seems to be gaming that drives the progress and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

• kenshaw011267 Posts: 3,805

The A100 is a completely different beast altogether. It doesn't even have the dedicated ray tracing cores that the Quadro and consumer cards will have. And it is also a low volume product with a high markup. That makes it an easy choice for Nvidia to use TSMC. The A100 is basically an Amped-up version of Volta and its true successor. It may as well have a different name. So it is entirely possible that the rest of the lineup is on a totally different fab.

    And why would Nvidia care about making Turing obsolete? They have already stopped production of most Turing products and stock is reducing at stores. Nvidia wants you to buy their new products, not their old products, LOL. And by pushing Ampere high enough, they can even entice those who bought RTX Quadros just last year. If your business is making money using this equipment, you will make that upgrade.

    And Nvidia has done this in the past plenty of times. Even Turing itself made Pascal obsolete for ray tracing applications over night.

I wouldn't count on the A100 being low volume at that price. Assuming they can actually deliver, those look very appealing to lots of our customers (also assuming the performance matches the hype). 

• nicstt Posts: 11,715

    I had 8 figures in Blender and they rendered quickly. I forget how long. Admittedly they only had bikinis, but all had strand-based hair (particles in Blender), but there was no optimisation. I did it as a test out of curiosity. None of the characters shared textures either.

• marble Posts: 7,500
    edited August 2020
    nicstt said:

    I had 8 figures in Blender and they rendered quickly. I forget how long. Admittedly they only had bikinis, but all had strand-based hair (particles in Blender), but there was no optimisation. I did it as a test out of curiosity. None of the characters shared textures either.

    So you are suggesting that Cycles is less demanding in terms of VRAM than IRay? That's interesting. As a matter of further interest, was this with your GPU (if so, what do you have and how much VRAM) or was it with your Threadripper which obviously has access to all your system RAM.

    Back to my Blender tutorials though.

    Post edited by marble on
• nicstt Posts: 11,715

980 Ti; IIRC it was a hybrid render, using both. I don't know if it can be less demanding in terms of VRAM, as an uncompressed texture should be the same size anywhere? Maybe because Cycles does out-of-core rendering, there are reduced chances of issues. I have run out of RAM on the 980 Ti, but closing Blender and restarting is considerably faster.

I'll do one again and post results in the Blender thread and reference it here; it might take me a while to set up.

• nonesuch00 Posts: 18,320
    marble said:
    Drip said:

... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

I don't think NVidia care too much about IRay users as it seems to be gaming that drives the progress and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

What subD are you using? Either use only normal/displacement maps with subD at 1, or use no normal/displacement maps with subD at 3 or 4, or even 5 if you want to try.

• marble Posts: 7,500
    marble said:
    Drip said:

... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

I don't think NVidia care too much about IRay users as it seems to be gaming that drives the progress and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

What subD are you using? Either use only normal/displacement maps with subD at 1, or use no normal/displacement maps with subD at 3 or 4, or even 5 if you want to try.

    When I use Scene Optimizer (always) I select  level 3 for the mesh resolution. I have not been deleting Normal/Displacement maps because I'm not sure whether the HD details are identical to the normal/displacement maps - I suspect that they are not. 

• nicstt Posts: 11,715
    edited August 2020
    marble said:
    nicstt said:

    I had 8 figures in Blender and they rendered quickly. I forget how long. Admittedly they only had bikinis, but all had strand-based hair (particles in Blender), but there was no optimisation. I did it as a test out of curiosity. None of the characters shared textures either.

    So you are suggesting that Cycles is less demanding in terms of VRAM than IRay? That's interesting. As a matter of further interest, was this with your GPU (if so, what do you have and how much VRAM) or was it with your Threadripper which obviously has access to all your system RAM.

    Back to my Blender tutorials though.

Partway through; just tested 4 characters and it doesn't render; Blender reported nearly 8GB of RAM for the card, which is more than it has, and it failed to start. 10 minutes to render on the Threadripper only; it wouldn't render in hybrid mode for some reason. I thought it did previously, but obviously our memories are unreliable.

    Edit:

OK, something's wrong, as it won't render anywhere, or it's black. So the lights got messed up between renders; I'm being dumb - not the first time and won't be the last.

    Edit2:

OK, that's sorted; interesting that GPU-Z reports 2880-ish MB used for the 4 characters, hair, bikini and HDRI.

    Post edited by nicstt on
• marble Posts: 7,500
    nicstt said:
    marble said:
    nicstt said:

    I had 8 figures in Blender and they rendered quickly. I forget how long. Admittedly they only had bikinis, but all had strand-based hair (particles in Blender), but there was no optimisation. I did it as a test out of curiosity. None of the characters shared textures either.

    So you are suggesting that Cycles is less demanding in terms of VRAM than IRay? That's interesting. As a matter of further interest, was this with your GPU (if so, what do you have and how much VRAM) or was it with your Threadripper which obviously has access to all your system RAM.

    Back to my Blender tutorials though.

Partway through; just tested 4 characters and it doesn't render; Blender reported nearly 8GB of RAM for the card, which is more than it has, and it failed to start. 10 minutes to render on the Threadripper only; it wouldn't render in hybrid mode for some reason. I thought it did previously, but obviously our memories are unreliable.

    Edit:

OK, something's wrong, as it won't render anywhere, or it's black. So the lights got messed up between renders; I'm being dumb - not the first time and won't be the last.

    Edit2:

OK, that's sorted; interesting that GPU-Z reports 2880-ish MB used for the 4 characters, hair, bikini and HDRI.

    How does that GPU-Z reading compare with IRay? 
     

• nonesuch00 Posts: 18,320
    marble said:
    marble said:
    Drip said:

... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

I don't think NVidia care too much about IRay users as it seems to be gaming that drives the progress and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

What subD are you using? Either use only normal/displacement maps with subD at 1, or use no normal/displacement maps with subD at 3 or 4, or even 5 if you want to try.

    When I use Scene Optimizer (always) I select  level 3 for the mesh resolution. I have not been deleting Normal/Displacement maps because I'm not sure whether the HD details are identical to the normal/displacement maps - I suspect that they are not. 

If they have preset options to turn the normal maps ON or OFF, then after you change subD to 3 you can turn the normal maps OFF with the preset. You are right not to manually mess with the surfaces if you don't know for sure. PAs should have supplied presets to turn the normal details ON or OFF. Don't forget that some geografted add-ons also have subD sliders and normal map ON/OFF presets, and they do not combine sensibly.

• nicstt Posts: 11,715
    marble said:
    nicstt said:
    marble said:
    nicstt said:

    I had 8 figures in Blender and they rendered quickly. I forget how long. Admittedly they only had bikinis, but all had strand-based hair (particles in Blender), but there was no optimisation. I did it as a test out of curiosity. None of the characters shared textures either.

    So you are suggesting that Cycles is less demanding in terms of VRAM than IRay? That's interesting. As a matter of further interest, was this with your GPU (if so, what do you have and how much VRAM) or was it with your Threadripper which obviously has access to all your system RAM.

    Back to my Blender tutorials though.

Partway through; just tested 4 characters and it doesn't render; Blender reported nearly 8GB of RAM for the card, which is more than it has, and it failed to start. 10 minutes to render on the Threadripper only; it wouldn't render in hybrid mode for some reason. I thought it did previously, but obviously our memories are unreliable.

    Edit:

OK, something's wrong, as it won't render anywhere, or it's black. So the lights got messed up between renders; I'm being dumb - not the first time and won't be the last.

    Edit2:

OK, that's sorted; interesting that GPU-Z reports 2880-ish MB used for the 4 characters, hair, bikini and HDRI.

    How does that GPU-Z reading compare with IRay? 
     

I didn't check; I never bothered after initially finding that there were discrepancies.

• marble Posts: 7,500
    marble said:
    marble said:
    Drip said:

... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

    I don't think NVidia cares too much about Iray users, as it seems to be gaming that drives the progress and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

    What subD level are you using? Either use only normal/displacement maps with subD at 1, or use no normal/displacement maps with subD at 3 or 4 - or even 5 if you want to try.

    When I use Scene Optimizer (always) I select  level 3 for the mesh resolution. I have not been deleting Normal/Displacement maps because I'm not sure whether the HD details are identical to the normal/displacement maps - I suspect that they are not. 

    If they have normal map preset options (ON or OFF), then after you change subD to 3 you can turn the normal maps OFF with the preset. You are right not to manually mess with the surfaces if you don't know for sure; PAs should have supplied presets to turn the normal details ON or OFF. Don't forget that some geografted add-ons also have subD sliders and normal map ON/OFF presets, and they do not combine sensibly.

    Correct - some products do have options to use either normal maps or HD, and I have not been acting on that. I'm not sure how much difference it will make, but I'll look out for it in future.
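    A rough back-of-the-envelope on those texture sizes (a sketch only; Iray applies its own compression, so treat these as uncompressed upper bounds - the per-figure map count is an assumption for illustration):

    ```python
    # VRAM estimate for an uncompressed square RGBA8 texture (4 bytes per pixel).
    def texture_mib(size_px, channels=4, bytes_per_channel=1):
        return size_px * size_px * channels * bytes_per_channel / (1024 ** 2)

    print(texture_mib(4096))  # a single 4k map: 64.0 MiB
    print(texture_mib(2048))  # the same map halved by Scene Optimizer: 16.0 MiB
    print(texture_mib(256))   # a 256px nail texture: 0.25 MiB

    # Assume ~10 surface zones x 3 maps (diffuse, normal, roughness) on a figure:
    print(30 * texture_mib(4096))  # ~1920 MiB for one figure at full 4k
    ```

    That quadratic growth is why halving resolution pays off so quickly: one step from 4k to 2k recovers three quarters of the texture budget.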

  • kyoto kid Posts: 41,256
    nicstt said:

    980ti, iirc; it was a hybrid render, using both. I don't know if it can be less demanding in terms of VRAM, as an uncompressed texture should be the same size anywhere? Maybe because Cycles does out-of-core rendering, there are reduced chances of issues. I have run out of RAM on the 980ti, but closing Blender and restarting is considerably faster.

    I'll do one again and post results in the Blender thread and reference it here; it might take me a while to set up.

    ...I thought though that Cycles was a non GPU based render engine.

  • marble Posts: 7,500
    edited August 2020
    kyoto kid said:

    ...I thought though that Cycles was a non GPU based render engine.

    It can use CPU and/or GPU ...

    Cycles.jpg
    827 x 688 - 73K
    Post edited by marble on
  • marble Posts: 7,500
    edited August 2020
    If they have normal map preset options (ON or OFF), then after you change subD to 3 you can turn the normal maps OFF with the preset. You are right not to manually mess with the surfaces if you don't know for sure; PAs should have supplied presets to turn the normal details ON or OFF. Don't forget that some geografted add-ons also have subD sliders and normal map ON/OFF presets, and they do not combine sensibly.

    I did a couple of experiments in Iray using a skin which has both normal maps and HD (Marilla by iSourceTextures). Just the G8F, no hair or clothing, although I did have geo-grafts as this would be normal for my characters.

    1. Marilla G8F skin with HD level 3 and Normal maps applied - GPU-Z reported VRAM at 4600MB.

    2. HD level 3 but removed all normal maps - GPU-Z: 4300MB

    3. HD Level 4 with normal maps still removed - GPU-Z: 5200MB.

    So this tells me that HD level makes a considerable difference to VRAM usage - indeed, more so than removing the 4K normal maps.

    [EDIT] I loaded the base Marilla (i.e. no geo-grafts) and the difference was negligible, as the geo-grafts seem to take up only about 60MB.
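    For what it's worth, those numbers line up with how subdivision scales: each Catmull-Clark level splits every quad into four, so geometry memory grows roughly 4x per level. A quick sketch (the Genesis 8 base face count here is an assumption for illustration):

    ```python
    # Each subdivision level multiplies the quad count (and roughly the
    # geometry memory) by four.
    BASE_QUADS = 16_556  # approximate Genesis 8 base mesh; assumed for illustration

    def faces_at_level(base, level):
        return base * 4 ** level

    for level in range(5):
        print(level, faces_at_level(BASE_QUADS, level))
    # Level 3 is ~1.06M quads; level 4 jumps to ~4.2M, which would explain
    # a VRAM increase like the 4.3GB -> 5.2GB step above.
    ```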

    Post edited by marble on
  • kyoto kid Posts: 41,256

    ...OK, thanks.  Guess it's time to try out that Blender/Daz bridge. 

  • i53570k Posts: 212
    marble said:
    I did a couple of experiments in Iray using a skin which has both normal maps and HD (Marilla by iSourceTextures). Just the G8F, no hair or clothing, although I did have geo-grafts as this would be normal for my characters.

    1. Marilla G8F skin with HD level 3 and Normal maps applied - GPU-Z reported VRAM at 4600MB.

    2. HD level 3 but removed all normal maps - GPU-Z: 4300MB

    3. HD Level 4 with normal maps still removed - GPU-Z: 5200MB.

    So this tells me that HD level makes a considerable difference to VRAM usage - indeed, more so than removing the 4K normal maps.

    [EDIT] I loaded the base Marilla (i.e. no geo-grafts) and the difference was negligible, as the geo-grafts seem to take up only about 60MB.

    Have you tested the difference in rendering time? Iray will render much faster without normal maps, so you are trading VRAM for rendering speed by switching from normal maps to higher subD.

     

  • marble Posts: 7,500
    edited August 2020
    i53570k said:

    Have you tested the difference in rendering time? Iray will render much faster without normal maps, so you are trading VRAM for rendering speed by switching from normal maps to higher subD.

     

    I didn't time it, but I didn't notice much difference either.

     

    The point is - for me at least - that VRAM is the severe limiting factor when it comes to creating scenes. My creativity is spent trying to tell the story without the characters or scenery that I would like to use. So I do things like have three characters in a scene looking or speaking towards a fourth character supposedly out of shot, then in the next scene bring in that character and lose one of the others. I have bought and returned products because they are just too VRAM-heavy - some really impressive products that I would love to include in renders, but they would leave little to no space for my characters.

    So I'm really hoping that the 16GB 3070 is not a baseless rumour, but if it is, I'll have to look for alternatives. Maybe a second-hand Titan, or maybe just find another hobby.
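    On the compositing workflow mentioned earlier: the core of layering a character pass over a pre-rendered background is just the per-pixel "over" blend. A minimal sketch of the math (a real pipeline would do this in an image editor or compositor, not a pixel loop):

    ```python
    # "Over" compositing: out = fg * alpha + bg * (1 - alpha), per channel.
    # fg/bg are (r, g, b) tuples in 0-255; alpha is the foreground coverage, 0.0-1.0.
    def over(fg, bg, alpha):
        return tuple(round(f * alpha + b * (1 - alpha)) for f, b in zip(fg, bg))

    # A half-transparent red character pixel over a blue background pixel:
    print(over((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)
    ```

    The catch marble raises still applies: this only saves VRAM when the camera and background don't move between shots, since the background pass is baked from one viewpoint.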

    Post edited by marble on