Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...

Comments

  • Sevrin Posts: 6,310
    Diaspora said:

    Thinking about it some more, even if someone had $1,500 to burn and 10GB was enough VRAM, considering the lackluster improvement in performance between the 3080 and 3090 in the context of the doubling of price, is there any reason someone shouldn't just get two RTX 3080s?

    It would cost the same amount as a 3090 but ensure way more Iray iterations total, and more iterations per dollar.

    I suspect Nvidia got rid of the 3080's NVLink feature because otherwise they'd really struggle to move 3090s.

    If they sold two 3080s plus a link, they'd gross more than if they sold one 3090, which requires less power than two 3080s combined. There's no sign of a lack of demand, so I'm not sure how good a strategy that would be.

  • outrider42 Posts: 3,679

    Iray usually falls in line with other renderers in performance comparisons. In those, the 3090 is 20% faster than the 3080. That really is about as much faster as it can possibly be, since the 3090 has 20% more transistors. There is no way it can go further without serious overclocking, which would consume a lot of power.

    So there are things to consider. If 10GB is enough for you, then sure, a 3080 would be fine. But that assumes things stay the same; just a few years ago Daz content generally used less data, so it stands to reason that content will continue to grow in size over time.

    Another consideration is power draw. I am sure most of us here can handle a 3090. But going with two 3080s means over 640 Watts of power, and that is only the GPUs; it does not cover peak power draw. You will need a serious power supply to handle that, and a seriously good cooling solution; that kind of wattage is in small-space-heater territory. I do not think air cooling is going to work with two 3080s in most situations.

    And even if it does work very well at ejecting heat, well, that heat is going into your room, LOL. Even this can cause problems. An increase in ambient room temperature will cause the PC temps to rise as well, so you could end up in a situation where the warmer room causes the GPUs to run even hotter than they otherwise would. You need to think not just about the PC cooling, but about how you will cool the work space around it.
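
    For anyone actually budgeting a power supply for a dual-3080 build, the arithmetic is easy to sketch. A minimal example in Python; the 320 W figure is Nvidia's rated board power for the 3080, while the transient multiplier, rest-of-system draw, and headroom factor are illustrative assumptions, not measurements:

        # Rough PSU sizing for a hypothetical dual-3080 render box.
        GPU_TDP_W = 320          # Nvidia's rated board power per RTX 3080
        NUM_GPUS = 2
        REST_OF_SYSTEM_W = 250   # assumed: CPU, drives, fans, RAM, conversion losses
        TRANSIENT_FACTOR = 1.25  # assumed: short boost spikes above rated power
        PSU_HEADROOM = 1.2       # rule of thumb: keep ~20% headroom

        sustained = GPU_TDP_W * NUM_GPUS + REST_OF_SYSTEM_W
        peak = GPU_TDP_W * TRANSIENT_FACTOR * NUM_GPUS + REST_OF_SYSTEM_W
        print(f"Sustained draw: ~{sustained} W")                      # ~890 W
        print(f"Possible transient peaks: ~{peak:.0f} W")             # ~1050 W
        print(f"Suggested PSU rating: ~{peak * PSU_HEADROOM:.0f} W")  # ~1260 W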

    So there are more things to consider here. A 3080 + 2080 combo would still be a lot of power and heat. A big question is going to be how well the 3080 and 3090 heatsink designs work in multi-GPU setups. I have not seen a test of this.

  • Sevrin said:

    one 3090, which requires less power than two 3080s combined

    And delivers much much less performance per dollar.

  • Sevrin Posts: 6,310
    Diaspora said:
    Sevrin said:

    one 3090, which requires less power than two 3080s combined

    And delivers much much less performance per dollar.

    Where electricity is free.

  • Iray usually falls in line with other renderers in performance comparisons. In those, the 3090 is 20% faster than the 3080. That really is about as much faster as it can possibly be, since the 3090 has 20% more transistors.

    That is faulty logic in a couple of ways.

    First, digital logic does not scale linearly, i.e. if you wanted, say, a 64-bit adder instead of a 32-bit adder, it would cost you more than twice the number of gates in order to be twice as fast.

    Second, the more advanced technology has a lot more potential for speedup than an incremental improvement in the switching speed of the transistors. A totally contrived but illustrative example would be inserting a new integer into a sorted array of integers. A naive implementation would start at the first element and keep progressing to the next until that integer is larger than the one I'm trying to insert. It would work, but it would be on average 100,000 times slower if I had 1,000,000 integers in the array as opposed to only 10. But if I use a red-black tree structure instead of an array, inserting into a tree of 1,000,000 would only be about 5 times slower than it would be with 10. Algorithms are much more important than transistors. No programmer would ever do it the first way, and, mutatis mutandis, we could replace the array and the tree with "Turing" and "Ampere" and the argument would remain valid.
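
    To put numbers on that example, here is a minimal sketch (Python) that counts comparisons only; the red-black tree is modeled by its ~log2(n) insert cost rather than implemented, so this illustrates the scaling argument rather than benchmarking real code:

        import math

        def linear_scan_comparisons(n):
            # naive insert into a sorted array: scan ~half the elements on average
            return n // 2

        def balanced_tree_comparisons(n):
            # balanced (e.g. red-black) tree insert: ~log2(n) comparisons
            return max(1, math.ceil(math.log2(n)))

        for n in (10, 1_000_000):
            print(f"n={n:>9,}: linear ~{linear_scan_comparisons(n):,} comparisons, "
                  f"tree ~{balanced_tree_comparisons(n)} comparisons")

        # n=10:        linear ~5,       tree ~4
        # n=1,000,000: linear ~500,000, tree ~20
        # The linear scan gets ~100,000x slower; the tree only ~5x, as argued above.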

  • nonesuch00 Posts: 18,320

    I think the price is still excessive for consumer-oriented cards, even though it's massively improved over the recent past. I'll be getting a 3060, a 3070, or the 10GB 3080 (the 3080 only if on steep discount in the future). The 3D and 3D-gaming hobbies are out of the bag now, and these HW & SW businesses have to deliver really great improvements in those capabilities at consumer prices, or face folks sticking with their current HW & SW. It's not even a difficult choice not to buy if it's not delivering all you need at those kinds of prices.

  • RayDAnt Posts: 1,147

    Iray usually falls in line with other renderers in performance comparisons. In those, the 3090 is 20% faster than the 3080. That really is about as much faster as it can possibly be, since the 3090 has 20% more transistors.

    That is faulty logic in a couple of ways.

    First, digital logic does not scale linearly, i.e. if you wanted, say, a 64-bit adder instead of a 32-bit adder, it would cost you more than twice the number of gates in order to be twice as fast.

    Second, the more advanced technology has a lot more potential for speedup than an incremental improvement in the switching speed of the transistors. A totally contrived but illustrative example would be inserting a new integer into a sorted array of integers. A naive implementation would start at the first element and keep progressing to the next until that integer is larger than the one I'm trying to insert. It would work, but it would be on average 100,000 times slower if I had 1,000,000 integers in the array as opposed to only 10. But if I use a red-black tree structure instead of an array, inserting into a tree of 1,000,000 would only be about 5 times slower than it would be with 10. Algorithms are much more important than transistors. No programmer would ever do it the first way, and, mutatis mutandis, we could replace the array and the tree with "Turing" and "Ampere" and the argument would remain valid.

    The 3080 and 3090 are both built around the same GPU die - just with different numbers of SMs (the underlying independently functioning building blocks of all modern Nvidia GPU architectures) fully operational and enabled at the hardware level. All 3080s ship with 68 functional SMs. All 3090s ship with 82 functional SMs (approximately +20%). Therefore the 3090 will, with near certainty, be approximately 20% faster at Iray rendering than the 3080.
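
    Put as arithmetic (a sketch in Python that assumes Iray throughput scales linearly with enabled SM count and ignores clock and memory-bandwidth differences):

        SM_3080 = 68   # functional SMs on every RTX 3080
        SM_3090 = 82   # functional SMs on every RTX 3090

        speedup = SM_3090 / SM_3080   # ~1.21
        print(f"Expected 3090 speedup over 3080: ~{(speedup - 1) * 100:.0f}%")  # ~21%
        print(f"A 60 min render on a 3080 -> ~{60 / speedup:.0f} min on a 3090")  # ~50 min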

  • PerttiA Posts: 10,024
    Diaspora said:

    Thinking about it some more, even if someone had $1,500 to burn and 10GB was enough VRAM, considering the lackluster improvement in performance between the 3080 and 3090 in the context of the doubling of price, is there any reason someone shouldn't just get two RTX 3080s?

    It would cost the same amount as a 3090 but ensure way more Iray iterations total, and more iterations per dollar.

    No NVLink = max VRAM is still 10GB, which is enough for most at this time - but so was 4GB four years ago. The trend is clear, and requirements keep growing fast.

  • nicstt Posts: 11,715
    Diaspora said:

    More test results

    My original plan was to sell my RTX 2080 and just run an RTX 3090.

    I think the better value for me now is to KEEP my RTX 2080, add an RTX 3080 and a high-wattage power supply that can run both those cards and everything else, and I will end up coming out far ahead of a solitary RTX 3090.

    I say this as someone who avoids making scenes that require a lot of memory, so the 24 gigabytes isn't enough reason to pay the best-of-the-best premium.

    Gonna wait, though, for independently conducted Iray benchmarks; maybe the 3090 will really excel in those.

    If your scene doesn't fit on the card, then you have two expensive paperweights.

  • Sevrin Posts: 6,310

    I think the price is still excessive for consumer-oriented cards, even though it's massively improved over the recent past. I'll be getting a 3060, a 3070, or the 10GB 3080 (the 3080 only if on steep discount in the future). The 3D and 3D-gaming hobbies are out of the bag now, and these HW & SW businesses have to deliver really great improvements in those capabilities at consumer prices, or face folks sticking with their current HW & SW. It's not even a difficult choice not to buy if it's not delivering all you need at those kinds of prices.

    Well, they're more "prosumer"-oriented. Nobody who does a lot of rendering is an "average" computer consumer. They might have average money, but that's a different story. The "average" consumer does not play demanding games on a PC; they play games on their phone or console. A few million Red Dead Redemption (or whatever) players don't skew the average. They will buy a much lower-end card, likely from AMD, or be happy with their integrated graphics.

    As for the price, that's like saying a Porsche is overpriced for something you just drive a few blocks to buy milk. If that's all you need a car for, you simply shouldn't buy a Porsche.

  • nicstt Posts: 11,715
    Diaspora said:

    More test results

    My original plan was to sell my RTX 2080 and just run an RTX 3090.

    I think the better value for me now is to KEEP my RTX 2080, add an RTX 3080 and a high-wattage power supply that can run both those cards and everything else, and I will end up coming out far ahead of a solitary RTX 3090.

    I say this as someone who avoids making scenes that require a lot of memory, so the 24 gigabytes isn't enough reason to pay the best-of-the-best premium.

    Gonna wait, though, for independently conducted Iray benchmarks; maybe the 3090 will really excel in those.

    It seems to be 15-20% better than the 3080. I use Blender, and that was the gap there; after looking at the most recent video posted, the results in that were largely comparable.

  • RayDAnt said:

    The 3080 and 3090 are both built around the same GPU die - just with different numbers of SMs (the underlying independently functioning building blocks of all modern Nvidia GPU architectures) fully operational and enabled at the hardware level. All 3080s ship with 68 functional SMs. All 3090s ship with 82 functional SMs (approximately +20%). Therefore the 3090 will, with near certainty, be approximately 20% faster at Iray rendering than the 3080.

    I stand corrected. Apologies to @outrider42; I was thinking of the 2080.

  • nicstt Posts: 11,715
    Diaspora said:

    Thinking about it some more, even if someone had $1,500 to burn and 10GB was enough VRAM, considering the lackluster improvement in performance between the 3080 and 3090 in the context of the doubling of price, is there any reason someone shouldn't just get two RTX 3080s?

    It would cost the same amount as a 3090 but ensure way more Iray iterations total, and more iterations per dollar.

    I suspect Nvidia got rid of the 3080's NVLink feature because otherwise they'd really struggle to move 3090s.

    15%-ish more is well worth having, but not at that cost; one is paying about $80 per percentage point. If the larger frame buffer also factors in, then it's a different consideration.
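
    The per-point figure depends on which speedup you assume. A quick sketch using the launch MSRPs ($699 for the 3080 and $1,499 for the 3090; the speedup values are the 10-20% range discussed in this thread):

        MSRP_3080 = 699
        MSRP_3090 = 1499

        premium = MSRP_3090 - MSRP_3080   # $800
        for pct in (10, 15, 20):
            print(f"At {pct}% faster: ${premium / pct:.0f} per percentage point")
        # -> $80, $53, $40: the "$80 per percent" figure matches the ~10% gaming gap.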

  • Sevrin Posts: 6,310
    edited September 2020
    nicstt said:
    Diaspora said:

    Thinking about it some more, even if someone had $1,500 to burn and 10GB was enough VRAM, considering the lackluster improvement in performance between the 3080 and 3090 in the context of the doubling of price, is there any reason someone shouldn't just get two RTX 3080s?

    It would cost the same amount as a 3090 but ensure way more Iray iterations total, and more iterations per dollar.

    I suspect Nvidia got rid of the 3080's NVLink feature because otherwise they'd really struggle to move 3090s.

    15%-ish more is well worth having, but not at that cost; one is paying about $80 per percentage point. If the larger frame buffer also factors in, then it's a different consideration.

    You don't buy the 3090 for the speed increase, but for the memory.

    Post edited by Sevrin on
  • nicstt Posts: 11,715
    Sevrin said:
    nicstt said:
    Diaspora said:

    Thinking about it some more, even if someone had $1,500 to burn and 10GB was enough VRAM, considering the lackluster improvement in performance between the 3080 and 3090 in the context of the doubling of price, is there any reason someone shouldn't just get two RTX 3080s?

    It would cost the same amount as a 3090 but ensure way more Iray iterations total, and more iterations per dollar.

    I suspect Nvidia got rid of the 3080's NVLink feature because otherwise they'd really struggle to move 3090s.

    15%-ish more is well worth having, but not at that cost; one is paying about $80 per percentage point. If the larger frame buffer also factors in, then it's a different consideration.

    You don't buy the 3090 for the speed increase, but for the memory.

    Well, I'm presuming you read my whole statement, which ends by saying what you just said.

  • bluejaunte Posts: 1,923

    I'm thinking 3080 20GB would be best depending on cost.

  • Sorry for barging into the thread without reading the previous posts, but 32 pages is quite a long read.

    A 20GB 3080 has been confirmed by GALAX and Gigabyte. Any info on pricing/release date yet?

    Generally, I think getting two 20GB 3080s instead of a 3090 is gonna be a killer deal (even without SLI/NVLink). A whopping 17,000+ CUDA cores in total.

  • PerttiA Posts: 10,024

    Sorry for barging into the thread without reading the previous posts, but 32 pages is quite a long read.

    A 20GB 3080 has been confirmed by GALAX and Gigabyte. Any info on pricing/release date yet?

    Generally, I think getting two 20GB 3080s instead of a 3090 is gonna be a killer deal (even without SLI/NVLink). A whopping 17,000+ CUDA cores in total.

    Without NVLink it's not 20GB of VRAM, but just 10GB.

  • bluejaunte Posts: 1,923

    We're talking about the 20GB variant of the 3080 that has shown up in some slides. But we don't know how much it will cost, when it will come out, or even if it will come out at all yet.

  • Sevrin Posts: 6,310

    We're talking about the 20GB variant of the 3080 that has shown up in some slides. But we don't know how much it will cost, when it will come out, or even if it will come out at all yet.

    Yeah, it seems weird. I can't imagine that it will come out as just another "3080" without some kind of "Ti" or "Super" suffix. There was talk, based, I believe, on nothing, that Nvidia had abandoned their old suffixing ways, but it looks like that was premature. They should have a contest asking what the mutant baby's name will be.

  • Gordig Posts: 10,191
    Sevrin said:

    We're talking about the 20GB variant of the 3080 that has shown up in some slides. But we don't know how much it will cost, when it will come out, or even if it will come out at all yet.

    Yeah, it seems weird. I can't imagine that it will come out as just another "3080" without some kind of "Ti" or "Super" suffix. There was talk, based, I believe, on nothing, that Nvidia had abandoned their old suffixing ways, but it looks like that was premature. They should have a contest asking what the mutant baby's name will be.

    3080fu

  • nicstt Posts: 11,715
    Gordig said:
    Sevrin said:

    We're talking about the 20GB variant of the 3080 that has shown up in some slides. But we don't know how much it will cost, when it will come out, or even if it will come out at all yet.

    Yeah, it seems weird. I can't imagine that it will come out as just another "3080" without some kind of "Ti" or "Super" suffix. There was talk, based, I believe, on nothing, that Nvidia had abandoned their old suffixing ways, but it looks like that was premature. They should have a contest asking what the mutant baby's name will be.

    3080fu

    3080 FBFGPU

  • nicstt Posts: 11,715

    I'm thinking a 3090 and a 20GB 3080 would be a good match. The 3090 would be the display card, which means more VRAM would be used for the displays, leaving perhaps about the same amount free as on the 3080. That way, it would be rare for the 3080 to be dropped due to lack of VRAM.

    Anyone who thinks that isn't going to happen eventually, well I have some really nice land for sale.

  • We're talking about the 20GB variant of the 3080 that has shown up in some slides. But we don't know how much it will cost, when it will come out, or even if it will come out at all yet.

    This here... all we can hope for is that the 20GB variant isn't priced too high. If it's not, I'll be thinking about two of those bad boys.

    If it's close to the 3090's price, then I'd probably just go with a single 3090.

  • Considering the present food fight between Nvidia and the AIBs over this whole drivers/power-delivery design thing, I think Nvidia has to be having a bit of a come-to-Jesus moment with their attorneys. So they are probably pushing any releases beyond the 3070 back some.

    To fully explain: Nvidia did not provide a fully functional beta driver along with the reference PCBs, just a driver that would run Furmark. Furmark is a benchmark program that is essentially a power virus; it fully loads a GPU, but that keeps it from boosting very high. Ampere draws a lot of power during boost, way more than Nvidia admits to. The AIBs, lacking a driver that would let them test this for themselves and being told to expect a base cost below what they wanted, cut costs on power delivery right down to what Nvidia said the cards needed. The 3080s got out into the world and started crashing because the boost power draw wasn't what Nvidia said it was. So now there is finger-pointing all around. Nvidia is putting out a driver that somehow adjusts the boost behavior to correct the worst of this for the EVGA cards, but you can be sure the AIBs are facing lawsuits, and the AIBs have a legitimate gripe about not getting correct info from Nvidia...

    In short, I would not expect more cards any time soon. The AIBs have to redesign the cards they have on the market and the ones they have in the pipeline, and Nvidia has to change how they do these things so they don't get sued by their own partners.

  • RayDAnt Posts: 1,147

    So assuming for a moment that rumors/past expectations are true and Nvidia continues to fill out their mid to high-end card assortment this generation, this is how I see the various offerings working out in terms of actual value to a Daz Studio/Iray or similar 3D application user:

    Graphics Card | Pros | Cons
    RTX 3070 8GB | price | limited VRAM, performance, no NVLink
    RTX 3070 16GB (rumored) | VRAM | performance
    RTX 3080 10GB | price, performance | limited VRAM, no NVLink
    RTX 3080 20GB (rumored) | VRAM, performance | -
    RTX 3090 24GB | VRAM, NVLink support | price, limited driver support
    Titan RTX 24GB+ with Ampere (theoretical) | VRAM, driver support, NVLink support | price

    Assuming that the expanded-VRAM cards' MSRPs come in below the price of the baseline next-higher-tier card (because I don't see how it could be done any other way), this pretty much encapsulates why I see the 3090 as such a bad buy for someone with a 3D application use-case. If more VRAM is all you need, then the 3080 20GB is a no-brainer. If advanced functionality in creative apps is your game, then the Titan will be the logical choice. The 3090 just sits in a "no man's land" of added expense with limited functionality that not even the 10GB 3080 or 8GB 3070 fall into (since they at least have the argument of relative affordability on their side).

    Come to think of it, making this table suggests an overall market segmentation pattern that I could very well see being Nvidia's long-term plan for all of Ampere. Basically, what they're doing is following their usual xx70/xx80/etc. mid/high/highest GPU tiers and then doing a gamer-focused and a creator-focused variation of each one, for a nicely symmetrical overall lineup that looks something like this:

    Performance Tier | Gamer Variation | Creator Variation
    Middle | RTX 3070 | RTX 3070 16GB
    High | RTX 3080 | RTX 3080 20GB
    Highest | RTX 3090 | Titan RTX (Ampere)

    The main problem with this theory, of course, is the fact that the 3090 was very purposefully marketed as a creator-focused card. Perhaps the already-released 3090 is actually the Creator variation of this generation's "highest" tier GPU (meaning that a Gamer-variation 12GB 3090 is already in the works) and the Titan for this generation is a true one-off: the sole member of a third, Developer variation, perhaps (still separate from the full-blown Professional variation that would be Quadro). In that case, the overall lineup could be:

    Performance Tier | Gamer Variation | Creator Variation | Developer Variation | Professional Variation
    Middle | RTX 3070 | RTX 3070 16GB | - | RTX 5000 (Ampere)
    High | RTX 3080 | RTX 3080 20GB | - | RTX 6000 (Ampere)
    Highest | RTX 3090 12GB | RTX 3090 | Titan RTX (Ampere) | RTX 8000 (Ampere)
  • nicstt Posts: 11,715
    RayDAnt said:

    So assuming for a moment that rumors/past expectations are true and Nvidia continues to fill out their mid to high-end card assortment this generation, this is how I see the various offerings working out in terms of actual value to a Daz Studio/Iray or similar 3D application user: [...]

    ... And yet, the 3090 is missing the software that would truly let it be used as a creator-focused card.

    It was interesting to see Linus (LinusTechTips) criticise Nvidia for this.

  • outrider42 Posts: 3,679

    The issue with a 20GB 3080 is that it would almost completely nullify the existence of the 3090, unless its price were much higher than the regular 3080's. I've pretty much said that before: it would have to cost over $1,000 to make any sense (and yes, I know that sounds a bit wacky) with the 3090 around. The 3090 is not a Titan in any form; it simply has a lot of VRAM, so the $1,500 price tag just doesn't work here. If the 3090 had more of a performance gap over the 3080, I would be more accepting of the idea. But at just 10% for gaming (which is what most buyers will be looking at), the gap just is not there. You could almost overclock a good 3080 to this level of performance (assuming they fix the issues).

    If they launched a 20GB 3080 any time soon, it would be a PR disaster, as I already described. Just about no matter what AMD does, Nvidia will stand firm and keep the 20GB card for 2021. Though it does stand to reason that if AMD hits them extremely hard, and with some Nvidia fans unhappy about the 3080 problems, then maybe Nvidia would have little choice but to toss the 20GB 3080 out to appease them. It would still throw all current 3080 buyers under the bus. But like I said, AMD would have to really bring it hard; they would have to beat the 3080 in both price and performance. Maybe they can do it, but that remains to be seen.

    Otherwise, the 20GB 3080 is far off in the distance. I am talking second quarter of 2021 at the earliest. The 1080 Ti launched in March of 2017, about ten months after the 1080, so that is my metric from Nvidia's past history. It also gives them some time to perhaps get better yields from Samsung that perform a little better.

    Something caught my eye. There is a new Ryzen 5000 series benchmark out in the wild showing it beating a 10900K by a healthy margin in a game. This sounds wonderful, and indeed it is! But here is the issue I have with it: where are the leaked benches of RDNA2? In fact, the GPU used in this particular leak is an RTX 2080. WHAT??? So we already have Ryzen 5000 leaks but nothing on RDNA2. The only leaked RDNA2 benches are nearly a year old, so who knows what they were doing back then. That GPU could have been drawing 500 Watts on LN2 for all we know, done just to post a wild attention-getting benchmark. The lack of any similar benches since really bothers me. We all need AMD to compete here, because that directly benefits us.

  • Sevrin Posts: 6,310

    Wouldn't it make more sense to hold off on a bigger 3080 until 8K gaming monitors are more widely available? The 3090 seems intended for content creators and bleeding-edge gamers, but there will be increased demand for high-VRAM gaming cards once the new monitors become available.

  • NylonGirl Posts: 1,939
    Sevrin said:

    Wouldn't it make more sense to hold off on a bigger 3080 until 8K gaming monitors are more widely available? The 3090 seems intended for content creators and bleeding-edge gamers, but there will be increased demand for high-VRAM gaming cards once the new monitors become available.

    I would think that if the monitors cost $60,000 now, it will be years before they become mainstream.
