Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • nonesuch00nonesuch00 Posts: 18,320

    Except Nvidia has known about AMD's launch well in advance. They had to have known for at least as long as we have, and probably much longer, that AMD was looking to October/November, that they could compete, and that they might be offering 16gb models.

    So launching without any 20gb model, staying quiet about a 20gb model, and then just popping it out after AMD launches would be extraordinarily short-sighted on Nvidia's part. They wouldn't fool anybody with that ploy.

    Making your customers unhappy is usually bad for business. Those are bridges that companies want to avoid burning. It would be a move that hurts them for the long term, so even if they did well for a couple months, it would bite them back after that. So this stuff matters.

    And if selling GPUs were as simple as adding more VRAM, well, AMD would have smashed Nvidia long ago. AMD has routinely offered 8gb versions of a wide variety of lower-tier cards that beat the 1060 and 1050 on price to performance. Nobody cared. AMD also beat Nvidia to 8gb with the 390X back in 2015; it didn't matter.

    And really, if it was indeed that simple, then why not launch the 20gb right off the bat? The cost to them is not that significant in comparison to what they would charge.

    Also, at these price ranges, many customers are going to know at least a little bit about what they are buying. If they are buying a GPU by itself, they have to at least know how to install it and run it. That in itself requires a little bit of know-how, and people that do this tend to be just a little bit more knowledgeable about what they are putting in their computers. 

    I have no doubt that nVidia doesn't mind that impatient folk who buy the 8GB/10GB models will in many cases be forced to upgrade to 16GB/20GB models. Releasing 8GB/10GB models instead of 16GB/20GB models also lets nVidia gauge demand much better and not get burnt with a huge stock of unsold cards built with expensive outsourced components. It's much cheaper to have a large stock of unsold 8GB GPUs than a large stock of unsold 16GB GPUs when the RAM is the most expensive component outside the GPU itself. The RAM might even cost more than the GPU, though I'm not privy to nVidia's cost numbers.

  • nonesuch00nonesuch00 Posts: 18,320
    edited September 2020
    marble said:
    marble said:
    nicstt said:

    I keep telling folks to wait; the FE 3080 is still a decent deal, but more RAM is essential in rendering.

    Depends on what you're rendering. Back when I was using a GTX 970 with 4GB it was easier to reach the limit and drop to CPU rendering. With a 1070Ti and 8GB I don't think I ever had issues unless it was some ridiculous set piece or something. The 3080 has 10GB. There are also ways to reduce render memory usage, and utilities that can help as well.

     

    I just don't know how you can claim that. I often hit the 8GB limit with only 3 G8/G3 figures and after the whole scene has been run through the Scene Optimiser. It is a constant battle for me to stay within the 8GB limit and I find that very limiting indeed. I too started with a 970 but it lasted only a few weeks before I had to splash out on a 1070 which is what I'm still using. If I had the funds I would go for the new 3090 but I haven't so I'm really hoping that a 16GB 3070 is not far away.

    However, I am looking at other options like using Blender. I've been a bit disappointed that I have not been able to reproduce the IRay quality with Cycles yet but I'm a Blender novice while I've been using IRay for several years now.

    I don't know what you're doing that I'm not but I almost never hit 8 Gb. I keep track of my logs pretty closely and I haven't had a render exceed my 2070 in months. That's almost 100 production renders for my VN's. That's why I didn't pull the trigger on 2 2070 supers to get the NVLink bonus. I could have sold my 1080ti and 2070 and made most of the cost of the cards back but it just didn't make any sense. 

    All I am doing is making scenes with a few (3 or less) characters and a few props. I can't fit the big sets like the landscapes with lots of trees, etc., or interiors with lots of furniture, because they will blow my VRAM limit. I use every trick I can think of to reduce VRAM usage. Scene Optimiser is good but take it a step too far and the seams on the skins begin to show. I have the drop-to-CPU feature disabled so that if it goes over 8GB all I get is an instant black render. That happens far too often for my liking so I'm paranoid about adding too much to the scene and obsessive about reducing the texture sizes.

    So if you would like to share your secret, I'm sure that lots of us here would appreciate it.

    What resolution do you render at? I keep running into people who say 8Gb isn't enough who render at huge resolutions. If that's you, try 1080.

    This is a render from a chapter of my current VN that I did. It fit in 8Gb without a problem.

     

     

    Here is a scene that wouldn't even CPU render until I built a new PC with 32GB RAM instead of 16GB RAM. As you can see, it's a totally typical, everyday, average scene that isn't out of the ordinary for what DAZ 3D might expect customers to render using the models they sell.

    It was rendered at FHD (1920x1280) but got reduced in size when I uploaded it to the DAZ Galleries.

    I won't be trying to GPU render it until I get one of the GeForce RTX 30X0 GPUs. I know that Iray rendering with an nVidia GPU enables certain optimizations that don't happen with a CPU render, but I don't think it's going to be able to squeeze a render that takes over 24GB of RAM on the CPU into 8GB of VRAM on a GPU.

    No, it isn't. I thought this was well known. Those curved surface emissives are massive VRAM hogs. You can get away with one or two but you've got what a dozen?

    So aren't emissives part of Iray that customers can expect to be used in a typical DAZ product? They must be, because emissives were sold as part of that product.

    You know, part of being a knowledgeable professional is knowing that a consumer-oriented product for hobbyists is aimed at customers who don't read technical arcana about emissives and such things. They buy a product and use it as is, because that's what sold the product to the customer. They don't buy a car and then proceed to rebuild the engine in their brand new car.

    That product was designed that way. That is why DAZ 3D and those PAs should expect that a product they design and sell winds up getting used exactly as they designed it to be used. You seem to think it's Joe Average's fault for not redesigning the product themselves to avoid those emissives.

    Fact is, that's a perfectly average and legitimate scene that DAZ 3D, game engine companies, 3D companies, GPU designers, and PC designers can expect average hobby consumers to construct using the COTS products they sell to us. They came to us with their advertising, we didn't go to them, and they said we can do those things, so count on it: we customers expect the hardware, software, and digital models to be able to do those things. They need to up their game. The era of constructing 3D scenes the way PC users used to load programs into 512K of RAM in DOS days is over. They need to get on the stick! laugh

    Post edited by nonesuch00 on
  • outrider42outrider42 Posts: 3,679

    For once I agree with kenshaw on something, LOL. People don't have to be very tech savvy to feel screwed, and they don't have to be launch window buyers. Just seeing Nvidia release a new set of GPUs a few weeks later with suddenly doubled VRAM would be a huge red flag for many customers. Nvidia announced the 3070, and that product isn't coming until October. Yet they could not bother to tell us that double VRAM capacities would be coming in November?

    That just doesn't work. And maybe AMD blowing the doors off Nvidia would force their hand, but again, that would still point to very poor planning. Rumors about AMD being powerful and having 16gb have existed for months. A leaked AMD benchmark dates all the way back to early January of this year. Early January! That is an eternity in tech time. And yes, these companies absolutely do watch each other very closely; books have been written about the things these companies do. Why is this a surprise?

    We have some new information with a leaked Chinese review of the 3090. It is not very spectacular. In the games they tested, it was only 10% faster than the 3080, and in some games it was only 5%. If this is indeed true, it places all talk of a 3080 Super or 3080ti on hold. How on earth could a Super or ti exist if there is only a 10% difference between the 3080 and 3090? The only thing they could do is add the extra VRAM and maybe make the VRAM faster, since GDDR6X can hit 21Gbps. The current 3080's memory is rated for 19Gbps. That would add a little performance to the card, but not much. 
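    A rough back-of-the-envelope Python sketch of what that memory-speed bump would be worth, assuming the 3080's published 320-bit bus; the rates are the 19Gbps and 21Gbps figures mentioned above:

    # Peak memory bandwidth = per-pin rate (Gbps) x bus width (bits) / 8.
    def bandwidth_gb_s(gbps_per_pin, bus_width_bits=320):
        return gbps_per_pin * bus_width_bits / 8

    print(bandwidth_gb_s(19))  # 760.0 GB/s -- the stock 3080
    print(bandwidth_gb_s(21))  # 840.0 GB/s -- with 21Gbps memory, roughly 10% more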

    But this review also casts doubt on 20gb models because the 3090 is so expensive for so little gain. A 3080 with 20gb would make the 3090 pretty much irrelevant since it is so close in performance. It could be possible to overclock a 3080 to hit 3090 performance. The 3090 would need Titan features to justify its price over a 20gb 3080. But right now, it doesn't look like it has any Titan features. The only feature over the 3080 it has is SLI Nvlink.

    So now releasing a 20gb 3080 so soon would really sour those who bought a 3090, not just 3080 buyers.

    Obviously the tech industry is always moving forward at a rapid rate. But this would be too fast and would burn a whole lot of people. The effect would catch up to them. That is why these extra VRAM models launch in 2021. That gives them time.

    Jensen Huang made a pretty strong statement during his Ampere announcement. He said rather emphatically, "To all my Pascal friends, it's safe to upgrade now." What curious wording he chose. He was pitching Ampere not just to those who bought Turing, but to those who stuck with Pascal. This was a very telling statement, because Pascal sold a lot better than Turing. Various Pascal cards dominate the Steam survey. Nvidia really wants to get these people to upgrade. But this group of people is choosy. They skipped Turing because they didn't like the value. So Huang was trying to make his pitch to them. Now just imagine, after telling Pascal owners it is "safe to upgrade", that they double VRAM 8 weeks later. People who have had their cards for as long as 4 years were specifically told it was safe to upgrade.

    Boy that wouldn't go over so well. This statement would get mocked. It might even become immortalized...as a meme.

    That would be bad for business.

  • bluejauntebluejaunte Posts: 1,923

    Wouldn't really have to be 8 weeks later. First quarter 2021, why not?

  • nonesuch00 said:

    Fact is, that's a perfectly average and legitimate scene that DAZ 3D, game engine companies, 3D companies, GPU designers, and PC designers can expect average hobby consumers to construct using the COTS products they sell to us. They came to us with their advertising, we didn't go to them, and they said we can do those things, so count on it: we customers expect the hardware, software, and digital models to be able to do those things. They need to up their game. The era of constructing 3D scenes the way PC users used to load programs into 512K of RAM in DOS days is over. They need to get on the stick! laugh

    Fact is you spend time on the forums. People talk about curved emissive surfaces being an issue here a lot, that's where I first read about it and tested it for myself. Also you had a render drop to CPU and didn't try to optimize it. Why not? Seems pretty clear to me that the only unusual thing in the scene is the emissives so they'd be the thing I'd start looking at first.

  • marblemarble Posts: 7,500
    edited September 2020
    Fact is you spend time on the forums. People talk about curved emissive surfaces being an issue here a lot, that's where I first read about it and tested it for myself. Also you had a render drop to CPU and didn't try to optimize it. Why not? Seems pretty clear to me that the only unusual thing in the scene is the emissives so they'd be the thing I'd start looking at first.

    Perhaps it is a matter of context. I have also read about emissive spheres, etc., on the forum, but usually in the context of rendering speed, not VRAM. As I understand it, the sphere has lots of small polygons and each of these is emissive, compared to a flat plane which can be a single polygon. So Iray has to calculate the light distribution from each of those little polygons instead of just the one. I have never thought of a couple of hundred polygons as being a huge hog of VRAM, but I am open to being educated otherwise.
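    A rough Python sketch of the scale involved, purely illustrative: every face of an emissive mesh acts as its own small area-light source, and a UV sphere has hundreds to thousands of faces where a flat emissive plane can be a single quad. The segment/ring counts below are assumed example values, not taken from any particular product.

    # Face count of a UV sphere: (rings - 2) bands of quads plus a fan of
    # triangles at each pole; compare with a one-quad emissive plane.
    def uv_sphere_faces(segments=64, rings=32):
        quads = segments * (rings - 2)   # bands between the poles
        tris = 2 * segments              # the two pole fans
        return quads + tris

    print(uv_sphere_faces())  # 2048 emissive faces for one sphere
    print(1)                  # 1 emissive face for a flat plane light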

    Post edited by marble on
  • nicsttnicstt Posts: 11,715

    nonesuch00 said:

    I have no doubt that nVidia doesn't mind that impatient folk who buy the 8GB/10GB models will in many cases be forced to upgrade to 16GB/20GB models. Releasing 8GB/10GB models instead of 16GB/20GB models also lets nVidia gauge demand much better and not get burnt with a huge stock of unsold cards built with expensive outsourced components. It's much cheaper to have a large stock of unsold 8GB GPUs than a large stock of unsold 16GB GPUs when the RAM is the most expensive component outside the GPU itself. The RAM might even cost more than the GPU, though I'm not privy to nVidia's cost numbers.

    The FE card cooling systems are, I believe, $150-180; how accurate that is I have no idea, but it would explain why I've also 'heard' that Nvidia doesn't make much from them.

  • nicsttnicstt Posts: 11,715
    marble said:

    Perhaps it is a matter of context. I have also read about emissive spheres, etc., on the forum, but usually in the context of rendering speed, not VRAM. As I understand it, the sphere has lots of small polygons and each of these is emissive, compared to a flat plane which can be a single polygon. So Iray has to calculate the light distribution from each of those little polygons instead of just the one. I have never thought of a couple of hundred polygons as being a huge hog of VRAM, but I am open to being educated otherwise.

    I certainly wouldn't use spheres as an emissive.

    If I want a sphere light, I put a light inside and make the sphere glass or partially alphaed. The couple of times I tried it, that method seemed to be faster (in Blender).
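    For anyone who wants to try that in Blender, here is a minimal Python (bpy) sketch of the idea, assuming Blender 2.8+ and Cycles; the object and material names are made up for the example.

    import bpy

    # A small glass sphere with a point light at its centre, instead of an
    # emissive sphere surface.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.1, location=(0, 0, 1))
    sphere = bpy.context.active_object

    mat = bpy.data.materials.new(name="BulbGlass")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links
    nodes.clear()
    glass = nodes.new("ShaderNodeBsdfGlass")
    output = nodes.new("ShaderNodeOutputMaterial")
    links.new(glass.outputs["BSDF"], output.inputs["Surface"])
    sphere.data.materials.append(mat)

    # The point light does the actual lighting work.
    bpy.ops.object.light_add(type='POINT', location=(0, 0, 1))
    lamp = bpy.context.active_object
    lamp.data.energy = 50.0            # watts, in Cycles terms
    lamp.data.shadow_soft_size = 0.05  # soften the shadows slightly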

  • marble said:

    Perhaps it is a matter of context. I have also read about emissive spheres, etc., on the forum, but usually in the context of rendering speed, not VRAM. As I understand it, the sphere has lots of small polygons and each of these is emissive, compared to a flat plane which can be a single polygon. So Iray has to calculate the light distribution from each of those little polygons instead of just the one. I have never thought of a couple of hundred polygons as being a huge hog of VRAM, but I am open to being educated otherwise.

    Spheres by their very nature have lots more geometry than other primitives. Now add in all that light. Each light source may be very lightweight, but thousands of them add up. I also have to assume there is some optimization going on as well. If a bunch of emitters are all very close together, emitting in the same direction at the same angle, that's pretty easy to optimize. But when they're all emitting at slightly different angles? That means calculating each one individually. And the results of all that do consume VRAM. 

    But run the test yourself. Create a bunch of curved surfaces, spheres and whatever else you want. Make them emissive one at a time and render the scene. Eventually it will drop to CPU. Or get nonesuch00's scene, turn off the emitters, and render it; I'm sure it will render on an 8Gb card.
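    One way to watch that happen is to poll VRAM while you re-render after each change. A small Python sketch, assuming an Nvidia driver with nvidia-smi on the PATH; the one-second interval and GPU index 0 are arbitrary choices.

    import subprocess
    import time

    def used_vram_mib(gpu_index=0):
        # Ask nvidia-smi for the used memory of one GPU, in MiB.
        out = subprocess.check_output([
            "nvidia-smi",
            f"--id={gpu_index}",
            "--query-gpu=memory.used",
            "--format=csv,noheader,nounits",
        ], text=True)
        return int(out.strip())

    if __name__ == "__main__":
        while True:
            print(f"VRAM used: {used_vram_mib()} MiB")
            time.sleep(1)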

  • You can also use a point light with a non-point shape, rather than an emissive sphere. I haven't tested the results, at least with a view to comparing speed and memory use, but I think it might be an improvement.

  • marblemarble Posts: 7,500
    Spheres by their very nature have lots more geometry than other primitives. Now add in all that light. Each light source may be very lightweight, but thousands of them add up. I also have to assume there is some optimization going on as well. If a bunch of emitters are all very close together, emitting in the same direction at the same angle, that's pretty easy to optimize. But when they're all emitting at slightly different angles? That means calculating each one individually. And the results of all that do consume VRAM.

    But run the test yourself. Create a bunch of curved surfaces, spheres and whatever else you want. Make them emissive one at a time and render the scene. Eventually it will drop to CPU. Or get nonesuch00's scene, turn off the emitters, and render it; I'm sure it will render on an 8Gb card.

    You do have a tendency to be condescending, don't you? All I was saying was that emissive spheres have usually been discussed in terms of render speed. You didn't have to lecture me by repeating what I had just said in your own words. I said:

     As I understand it, the sphere has lots of small polygons and each of these is emissive ...

    And you said, as though giving new information:

    Spheres by their very nature have lots more geometry than other primitives. Now add in all that light. Each light source may be very lightweight, but thousands of them add up.

     

  • outrider42 said:

    But this review also casts doubt on 20gb models because the 3090 is so expensive for so little gain. A 3080 with 20gb would make the 3090 pretty much irrelevant since it is so close in performance. It could be possible to overclock a 3080 to hit 3090 performance. The 3090 would need Titan features to justify its price over a 20gb 3080. But right now, it doesn't look like it has any Titan features. The only feature over the 3080 it has is SLI Nvlink.

    Interesting. Maybe the Supers and Tis will not support NVLink? Honestly, that's why I want it so badly: I see it as having 48 gigs, not 24...

  • Only the 3090 of the current cards has NVLink. I'd be pretty surprised if any of the Super cards have it.

  • outrider42outrider42 Posts: 3,679

    It looks like Nvidia is trying to kill off NVLink and SLI, at least in the consumer space. The 3080 and the 3090 use the exact same chip, so the fact that Nvidia took the extra step of designing a board just for the 3080 with no NVLink connector says a lot. They could have just used the same board with different memory configs and controllers, but they didn't want to.

    Clearly this is to keep segmentation from the Quadro series, which always has this feature. As VRAM counts rise in gaming cards, Nvidia doesn't want to undercut Quadro that easily. So if you want 48GB of VRAM you can only get Quadro, 3090, or Titan RTX.

    It looks like Nvidia is trying to kill off NVLink and SLI, at least in the consumer space. The 3080 and the 3090 use the exact same chip, so the fact that Nvidia took the extra step of designing a board just for the 3080 with no NVLink connector says a lot. They could have just used the same board with different memory configs and controllers, but they didn't want to.

    Clearly this is to keep segmentation from the Quadro series, which always has this feature. As VRAM counts rise in gaming cards, Nvidia doesn't want to undercut Quadro that easily. So if you want 48GB of VRAM you can only get Quadro, 3090, or Titan RTX.

    SLI steadily lost support in games over the last five or so years, and even before then support was far from complete, even in AAA titles. It was a lot of money to spend for it to only work in a third of the games you owned, if you were lucky.

    With almost every game running at acceptable fps on mid tier HW there just isn't much demand for SLI anymore. NVLink really is a pro level thing and it makes some sense it is only on the pro and prosumer cards. It will suck for the DS users though. 

  • fred9803fred9803 Posts: 1,564
     

    With almost every game running at acceptable fps on mid tier HW there just isn't much demand for SLI anymore. NVLink really is a pro level thing and it makes some sense it is only on the pro and prosumer cards. It will suck for the DS users though. 

    The rumoured "Supers" likely (maybe) to hit the market next year with more VRAM might make multi-card memory pooling not so important, given that more VRAM on one card is more efficient (and far less expensive) than sharing memory between two high-end cards. If they have (again rumoured) 20GB of VRAM, wouldn't that be enough for virtually all DS users?
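    Some very rough texture math behind that question, as a worst-case Python sketch: a single 4K map held uncompressed at 4 bytes per pixel is 64 MiB, and a figure carries many maps. The surface and map counts below are made-up example values, and Iray's own texture compression will shrink the real numbers.

    # Worst-case texture footprint: width x height x bytes per pixel, uncompressed.
    def texture_mib(width, height, bytes_per_pixel=4):
        return width * height * bytes_per_pixel / 1024 ** 2

    per_4k_map = texture_mib(4096, 4096)   # 64.0 MiB per uncompressed 4K map
    per_figure = 8 * 4 * per_4k_map        # e.g. 8 surfaces x 4 maps each
    print(per_4k_map, per_figure / 1024)   # 64.0 MiB, ~2.0 GiB per figure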

  • Linus tested the 3080 with the updated Crysis (beta, remastered version).  It's still struggling a bit on the FPS...

    Of course, until there's an official release version of the new Crysis, we have no way of knowing what driver improvements may bring, but it's nice to know that there's still a game out there that's going to give the latest cards a run for their money.

    As to whether SLI would help in this situation if it was available, I have no idea...

  • joseftjoseft Posts: 310

    Linus tested the 3080 with the updated Crysis (beta, remastered version).  It's still struggling a bit on the FPS...

    Of course, until there's an official release version of the new Crysis, we have no way of knowing what driver improvements may bring, but it's nice to know that there's still a game out there that's going to give the latest cards a run for their money.

    As to whether SLI would help in this situation if it was available, I have no idea...

    I wouldn't read too much into the 3080 struggling on Crysis Remastered. The highest setting in that game is literally just for trolling hardware, like the original game was. Basically, that setting goes well beyond what the highest settings in any other game are. There are a couple of articles floating around that explain the idea behind it.

    For perspective, on my 2080 Ti, the second-highest setting generally gives triple the framerate that the highest one does. 

  • nicsttnicstt Posts: 11,715

    Only the 3090 of the current cards has NVLink. I'd be pretty surprised if any of the Super cards have it.

    I would agree.

    It would be an additional slap in the face to those who bought a normal card.

  • joseft said:

    I wouldn't read too much into the 3080 struggling on Crysis Remastered. The highest setting in that game is literally just for trolling hardware, like the original game was. Basically, that setting goes well beyond what the highest settings in any other game are. There are a couple of articles floating around that explain the idea behind it.

    For perspective, on my 2080 Ti, the second-highest setting generally gives triple the framerate that the highest one does. 

    The setting is literally called "Can it run Crysis". I got 12 fps with my 2070 at 4K, which was pretty funny, as that's about what I got with my 8800GTs in SLI back in 2008 on the original Crysis. Turned down, it is very playable and looks amazing.

     

  • fred9803 said:
     

    The rumoured "Supers" likely (maybe) to hit the market next year with more VRAM might make multi-card memory pooling not so important, given that more VRAM on one card is more efficient (and far less expensive) than sharing memory between two high-end cards. If they have (again rumoured) 20GB of VRAM, wouldn't that be enough for virtually all DS users?

    The thing is, VRAM isn't the issue for any current game, except maybe Flight Sim 2020. So Nvidia will be marketing these cards to creators and the clueless. Creators who don't have the budget for Quadros are a real market, no doubt, but how big is it? Keep in mind Nvidia just spent a fair bit of time telling consumers 10Gb is all they need. So if next spring they start saying here's a 16 and 20Gb card, they'll have some questions to answer. If they say the cards aren't intended for gamers, the AIBs will lose their shit. The AIBs' whole marketing is aimed at gamers. So I'm just not sure how these cards fit into the product stack or get marketed. I accept Nvidia has created them and intends to release them at some point, maybe, but they just don't make sense.

  • nicsttnicstt Posts: 11,715

    Haha

    I actually saw a Buy Now button on Nvidia's site; it lasted less than a second.

     

  • nicsttnicstt Posts: 11,715
    edited September 2020

    248 viewing, went up to 265 and then down to 248 and occasionally lower.

    There are 5 sites available via Nvidia's page, and the price of the card (the same card) ranges from 649.99 to 799.99; all are out of stock.

    3080.jpg
    Post edited by nicstt on
  • AalaAala Posts: 140

    Just ordered two MSI RTX 3090 Ventus cards, will arrive on October 20th. Here's hoping I can fit them on my PC, they're apparently 2.7 slots compared to my current 2.2 slot 2080 Ti's. And I also hope DAZ has an update by then that can utilize the 3xxx series. Will post benchmarks on the other thread when I do.

  • TheKDTheKD Posts: 2,703
    Aala said:

    Just ordered two MSI RTX 3090 Ventus cards, will arrive on October 20th. Here's hoping I can fit them on my PC, they're apparently 2.7 slots compared to my current 2.2 slot 2080 Ti's. And I also hope DAZ has an update by then that can utilize the 3xxx series. Will post benchmarks on the other thread when I do.

    Care to donate one of your 2080 Ti's to a poor impoverished TheKD?   :P

  • AalaAala Posts: 140
    TheKD said:
    Aala said:

    Just ordered two MSI RTX 3090 Ventus cards, will arrive on October 20th. Here's hoping I can fit them on my PC, they're apparently 2.7 slots compared to my current 2.2 slot 2080 Ti's. And I also hope DAZ has an update by then that can utilize the 3xxx series. Will post benchmarks on the other thread when I do.

    Care to donate one of your 2080 Ti's to a poor impoverished TheKD?   :P

    Heh, I do wonder how much money and time it would cost to ship them from Eastern Europe.

    I do plan on donating one to my little nephew though. Trying to get him to learn Blender, and it'll help him massively over his 1060.

  • TheKDTheKD Posts: 2,703

    Ah blender, good cause to donate to. Good luck to your nephew lol

  • kyoto kidkyoto kid Posts: 41,257

    ...anyone who replaces a 16 GB RTX Quadro 5000 with a 3090 I'd be glad to take the old card off your hands. ;-)

  • nicsttnicstt Posts: 11,715

    I'd actually considered it, then decided to go for a Titan RTX; the 3090 is almost a steal at a 1000 less - almost!

  • Aala said:

    Just ordered two MSI RTX 3090 Ventus cards, will arrive on October 20th. Here's hoping I can fit them on my PC, they're apparently 2.7 slots compared to my current 2.2 slot 2080 Ti's. And I also hope DAZ has an update by then that can utilize the 3xxx series. Will post benchmarks on the other thread when I do.

    That's 700 watts just for the GPUs; that will be like running a space heater under your desk. It's bad enough for me here in the South with one lousy RTX 2060 cranking in the bedroom for an all-night render, and that's with central air at 73 degrees. I would have to have a separate wall-unit air conditioner in the bedroom if I had 2 x 3090s.
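    The space-heater comparison is fairly literal: essentially all of the electrical power a GPU draws ends up as heat in the room, and 1 W is roughly 3.41 BTU/hr. A quick Python sketch of the conversion:

    # Convert sustained electrical draw to heat output (1 W ~= 3.412 BTU/hr).
    def watts_to_btu_per_hr(watts):
        return watts * 3.412

    print(watts_to_btu_per_hr(700))   # ~2388 BTU/hr -- two 3090s at full load
    print(watts_to_btu_per_hr(1500))  # ~5118 BTU/hr -- a 1500 W space heater on high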
