Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • nonesuch00 Posts: 18,320
    kyoto kid said:
    nicstt said:

    On the 980 Ti, IIRC, it was a hybrid render, using both. I don't know if it can be less demanding in terms of VRAM, as an uncompressed texture should be the same size anywhere. Maybe because Cycles does out-of-core rendering, there are reduced chances of issues. I have run out of RAM on the 980 Ti, but closing Blender and restarting is considerably faster.

    I'll do one again and post results in the Blender thread and reference it here; it might take me a while to set up.

    ...I thought though that Cycles was a non GPU based render engine.

    No, there is a choice between CPU and GPU Compute as they call it.

  • nicstt Posts: 11,715
    edited August 2020
    kyoto kid said:
    nicstt said:

    On the 980 Ti, IIRC, it was a hybrid render, using both. I don't know if it can be less demanding in terms of VRAM, as an uncompressed texture should be the same size anywhere. Maybe because Cycles does out-of-core rendering, there are reduced chances of issues. I have run out of RAM on the 980 Ti, but closing Blender and restarting is considerably faster.

    I'll do one again and post results in the Blender thread and reference it here; it might take me a while to set up.

    ...I thought though that Cycles was a non GPU based render engine.

    GPU and/or CPU

    It has RTX support for the ridiculously priced RTX cards, which actually does perform very well - and then there is E-Cycles too, which is specially designed to take advantage of Nvidia RTX cards - or so I understand.

    Note that GPU means Nvidia AND AMD cards

    Post edited by nicstt on
  • nicstt Posts: 11,715
    kyoto kid said:

    ...OK, thanks.  Guess it's time to try out that Blender/Daz bridge. 

    Diffeomorphic is a far better solution IMO; it's pretty much bug-free - the 1.5 beta isn't giving me any issues as I use it. YMMV.

  • https://mobile.twitter.com/kopite7kimi/status/1295520974796816384

    Looks like meat's back on the menu boys!

    However, the 3090 is shaping up to look like a cut-down Titan instead of a 2080 Ti replacement.

  • nicstt Posts: 11,715
    volpler11 said:

    https://mobile.twitter.com/kopite7kimi/status/1295520974796816384

    Looks like meat's back on the menu boys!

    However, the 3090 is shaping up to look like a cut-down Titan instead of a 2080 Ti replacement.

    The xx80 Ti is already a cut-down Titan, and other than RAM, not much of a cut-down; a major difference is that it is treated as a compute card.

  • TheKD Posts: 2,703
    edited August 2020
    marble said:
     

    2. HD level 3 but removed all normal maps - GPU-Z: 4300MB

    3. HD Level 4 with normal maps still removed - GPU-Z: 5200MB.

    So this tells me that HD level makes a considerable difference to VRAM usage. Indeed, more so than removing the 4K normal maps.

    [EDIT] I loaded the base Marilla (i.e. no geo-grafts) and the difference was negligible, as the geo-grafts seem to take up only about 60MB.

    I did a test for g8f, g8m should be close to the same numbers I assume. At lvl 3 subdivision, g8f has ~1,047,552 polygons. At lvl 4 it's ~4,190,208 polygons. So it makes a bit of sense that going up to lvl 4 is going to have a pretty big jump in VRAM. Up to lvl 5 would be ~16,760,832 lol.

    It would be cool if we had more control over subd - like keep all the areas that were covered by clothes at subd1; areas like the shins and forearms that are exposed but don't have any details that need a lot of polygons at subd2; but the face that needs to show a wicked scar, bump it up to subd5. Like a subd weight map or something.
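    TheKD's numbers follow directly from Catmull-Clark subdivision: each level splits every quad into four, so polygon counts grow 4x per level. A quick sketch (the ~16,368 base quad count and the function name are my own, derived by dividing TheKD's figures back down):

```python
# Each Catmull-Clark subdivision level splits every quad into 4,
# so an all-quad mesh grows by a factor of 4 per level.
BASE_QUADS = 16_368  # G8F base mesh, inferred from the counts quoted above

def polys_at_subd(level: int, base: int = BASE_QUADS) -> int:
    """Polygon count of an all-quad mesh after `level` subdivision steps."""
    return base * 4 ** level

for lvl in range(6):
    print(f"SubD {lvl}: {polys_at_subd(lvl):>10,} polygons")
```

    Level 3 gives 1,047,552 and level 4 gives 4,190,208, matching the figures in the post.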

    Post edited by TheKD on
  • nonesuch00 Posts: 18,320
    marble said:
    marble said:
    marble said:
    Drip said:

    ... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future-proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or have made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

    I don't think NVidia care too much about IRay users, as it seems to be gaming that drives the progress, and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

    What subD are you using? Either use only normal/displacement maps with subD at 1, or use no normal/displacement with subD at 3 or 4 - or even 5 if you want to try.

    When I use Scene Optimizer (always) I select level 3 for the mesh resolution. I have not been deleting Normal/Displacement maps because I'm not sure whether the HD details are identical to the normal/displacement maps - I suspect that they are not.

    If there are preset options to turn the normal maps ON or OFF, then after you change subD to 3 you can turn the normal maps OFF with the preset. You are right not to manually mess with the surfaces if you don't know for sure. PAs should have supplied presets to turn the normal details ON or OFF. Don't forget that some geo-grafted add-ons also have subD sliders and normal map ON/OFF presets, and they do not combine sensibly.

    I did a couple of experiments in IRay using a skin which has normal maps and HD (Marilla by iSourceTextures). Just the G8F, no hair or clothing, although I did have geo-grafts as this would be normal for my characters.

    1. Marilla G8F skin with HD level 3 and Normal maps applied - GPU-Z reported VRAM at 4600MB.

    2. HD level 3 but removed all normal maps - GPU-Z: 4300MB

    3. HD Level 4 with normal maps still removed - GPU-Z: 5200MB.

    So this tells me that HD level makes a considerable difference to VRAM usage. Indeed, more so than removing the 4K normal maps.

    [EDIT] I loaded the base Marilla (i.e. no geo-grafts) and the difference was negligible, as the geo-grafts seem to take up only about 60MB.

    So not a majority of your RAM, but enough to make a difference in whether your scene gets kicked out to the CPU. For 4 characters, 300 MB each is 1.2 GB, and 4 geo-grafts at 60 MB each is 240 MB, bringing the total to about 1.44 GB. Some of that could possibly be instanced, depending on your material sets and such. I know when I first started with DAZ Studio and had only an Intel laptop with no discrete GPU and only 4 GB of system RAM, I could only CPU-render 3 Genesis 3 characters in the DAZ Carnival sets (most of the carnival sets were occluded) before I ran out of system RAM. I was not zoomed in at all on the characters, though, but the renders were at 4K.
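    As a rough sketch of the arithmetic above (the 300 MB and 60 MB figures are the GPU-Z deltas measured in this thread; the function name is illustrative):

```python
# Back-of-the-envelope VRAM overhead from HD characters and geo-grafts,
# using the GPU-Z deltas reported above (decimal GB).
MB_PER_CHARACTER = 300  # HD level 4 vs level 3 delta, per figure
MB_PER_GEOGRAFT = 60    # measured geo-graft overhead, per figure

def scene_overhead_gb(characters: int, geografts: int) -> float:
    """Extra VRAM (GB) contributed by HD detail and geo-grafts."""
    total_mb = characters * MB_PER_CHARACTER + geografts * MB_PER_GEOGRAFT
    return total_mb / 1000

print(f"{scene_overhead_gb(4, 4):.2f} GB")  # prints 1.44 GB
```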

  • nonesuch00 Posts: 18,320
    TheKD said:
    marble said:
     

    2. HD level 3 but removed all normal maps - GPU-Z: 4300MB

    3. HD Level 4 with normal maps still removed - GPU-Z: 5200MB.

    So this tells me that HD level makes a considerable difference to VRAM usage. Indeed, more so than removing the 4K normal maps.

    [EDIT] I loaded the base Marilla (i.e. no geo-grafts) and the difference was negligible, as the geo-grafts seem to take up only about 60MB.

    I did a test for g8f, g8m should be close to the same numbers I assume. At lvl 3 subdivision, g8f has ~1,047,552 polygons. At lvl 4 it's ~4,190,208 polygons. So it makes a bit of sense that going up to lvl 4 is going to have a pretty big jump in VRAM. Up to lvl 5 would be ~16,760,832 lol.

    It would be cool if we had more control over subd - like keep all the areas that were covered by clothes at subd1; areas like the shins and forearms that are exposed but don't have any details that need a lot of polygons at subd2; but the face that needs to show a wicked scar, bump it up to subd5. Like a subd weight map or something.

    If they are wearing clothing, then when the scene is sent to the renderer it is rebuilt and the hidden parts of the models, including the body under the clothing, are deleted. I think that's why you do end up with rendering artifacts from time to time, when that process doesn't go exactly right.

    Now, if you use IRay Preview rendering, it's a completely different process; I'm pretty sure the hidden polygons aren't deleted, but it would be nice to know for sure.

  • marble Posts: 7,500
    marble said:
    marble said:
    marble said:
    Drip said:

    ... and the most important bit: how much VRAM will it have. 8GB is quite decent, and is enough for most of my needs. But, an increase to, say, 12 GB or more could make the 3070 more future-proof, as assets seem to get more complex geometry again and texture sizes are slowly getting bigger as well. It's just a matter of time before someone includes 4k nail textures on a model, without considering customers with slower rigs, or the fact that in 99.9% of the renders, one wouldn't see the difference between nails with 256 vs 4k textures. Current designers are generally conscious about this, or have made it second nature to optimize textures. I mainly worry about new artists who never had to worry about memory limitations or had no need to think logically about what's necessary.

    I just don't understand this claim at all. I have a 1070 with 8GB and I hit that limit very easily with just three G3/G8 characters and a few props - all optimised with the 4k textures reduced by half. Try to add a fourth G8 (or G3) and I'm back to CPU immediately. If the 3070 (which looks like it will once again be the one for my price range) is offered with only 8GB again I really think I will look for something else to play with in my spare time. 

    I've also seen mention that the answer is compositing but, if I understand the idea correctly, that means rendering out a static background scene and then a scene with the characters. I can't see that workflow working for me because I need to move my characters and move the camera so they are in different positions relative to the background. So I would have to render the background again for each new scene (and I tend to have 60 - 100 scenes in a project). Also, I optimise my props too so my backgrounds (usually indoor rooms with shaders rather than texture maps where possible) take up a small percentage of the total VRAM usage. Human skin realism takes lots of maps and they are mostly 4k maps which is why the Scene Optimiser is my most-used utility.

    I don't think NVidia care too much about IRay users, as it seems to be gaming that drives the progress, and VRAM is low on the list of priorities for gaming, from what I understand from reading this forum and various online articles.

    What subD are you using? Either use only normal/displacement maps with subD at 1, or use no normal/displacement with subD at 3 or 4 - or even 5 if you want to try.

    When I use Scene Optimizer (always) I select level 3 for the mesh resolution. I have not been deleting Normal/Displacement maps because I'm not sure whether the HD details are identical to the normal/displacement maps - I suspect that they are not.

    If there are preset options to turn the normal maps ON or OFF, then after you change subD to 3 you can turn the normal maps OFF with the preset. You are right not to manually mess with the surfaces if you don't know for sure. PAs should have supplied presets to turn the normal details ON or OFF. Don't forget that some geo-grafted add-ons also have subD sliders and normal map ON/OFF presets, and they do not combine sensibly.

    I did a couple of experiments in IRay using a skin which has normal maps and HD (Marilla by iSourceTextures). Just the G8F, no hair or clothing, although I did have geo-grafts as this would be normal for my characters.

    1. Marilla G8F skin with HD level 3 and Normal maps applied - GPU-Z reported VRAM at 4600MB.

    2. HD level 3 but removed all normal maps - GPU-Z: 4300MB

    3. HD Level 4 with normal maps still removed - GPU-Z: 5200MB.

    So this tells me that HD level makes a considerable difference to VRAM usage. Indeed, more so than removing the 4K normal maps.

    [EDIT] I loaded the base Marilla (i.e. no geo-grafts) and the difference was negligible, as the geo-grafts seem to take up only about 60MB.

    So not a majority of your RAM, but enough to make a difference in whether your scene gets kicked out to the CPU. For 4 characters, 300 MB each is 1.2 GB, and 4 geo-grafts at 60 MB each is 240 MB, bringing the total to about 1.44 GB. Some of that could possibly be instanced, depending on your material sets and such. I know when I first started with DAZ Studio and had only an Intel laptop with no discrete GPU and only 4 GB of system RAM, I could only CPU-render 3 Genesis 3 characters in the DAZ Carnival sets (most of the carnival sets were occluded) before I ran out of system RAM. I was not zoomed in at all on the characters, though, but the renders were at 4K.

    What I have also noticed is that the VRAM increase is not linear with the addition of figures. I guess this is to be expected if some share the same textures, but I have had scenes where GPU-Z was reporting 7GB with only two characters, clothing and environment (props, buildings, etc.). Then I added a third character, expecting it to drop to CPU, but it still came in under 8GB. I would need to do a lot more experiments to determine whether shared textures were a factor, but I also suspect that the compression IRay applies is not linear and becomes more effective as more content is added.

  • marble Posts: 7,500

    Oh boy. I'm considering selling my car and using the bus to get the money for one ;)

    Seriously though, does that suggest that the 16GB 3070 might be true too - and at what price?

  • outrider42 Posts: 3,679

    Keep in mind that any prices you hear are totally not final. Specs may be pretty final here, but prices are a wild card. So this next rumor is already highly questionable.

    Some new rumors are saying the 3090 will release after all, and have a big price tag. I think it makes sense, in a way: it is all about the name. By calling it a "3090", that implies a higher tier than a "3080ti" would, which is why they would charge more for it. The 3090 in this rumor would have 24GB and cost $1400; the Founders Edition would be $1500. I know exactly what CEO Huang will say: he will compare the 3090 not to the 2080ti, but to the Turing Titan RTX. The Titan also had 24GB of VRAM, and the 3090 would absolutely smash the last-gen Titan in performance. And the Titan was $2500. So the 3090 for $1500....hey, that's a great deal!!! LOL. But seriously, it could be worse, IMO. If it indeed has 24GB of VRAM and offers so much more performance, I think a fair number of Dazzers will be happy to do that. Yes, I said Dazzers.

    I'm sure the pricing will stir up some debate. I think I mentioned this before, about whether it is called a 3080ti or 3090. If it is called a 3080ti, then the pricing would be close to the 2080ti. But calling it a 3090 changes things up just a bit. The question is, if the 3080ti exists at the same time, where does it fit in the lineup? Perhaps $1100? There is still a gap between the 3080 and 3090 to be filled.

    The rest of the cards seem to have prices more in line with their Turing counterparts. Like $500 for the 3070 - that would equal the launch price of the 2070. The Founders Edition of the 2070 was $600. The 3080 would be $800. The 2080 was $700, with the FE being $800.

    I would assume these would be the lower VRAM versions, with the extra VRAM versions costing more.

    BUT remember that prices can change in an instant. These are probably just placeholders somewhere. A common refrain is that CEO Huang may not finalize the prices until an hour before the presentation, based on the information he has. He'll put on his best mic-drop performance at this presentation to stomp AMD down, no matter what they release later on. But at the same time, he isn't going to go cheap, either. Like my example above, Nvidia will create a whole new tier in order to keep prices up. In the past the top card was called the x80, then they released a Titan and made it the top card. Then they added the x80ti to the lineup. It's all in the name.

  • kenshaw011267 Posts: 3,805

    24GB of GDDR6X by itself, cost to the manufacturer, is more than $400 US (at best - these cards are the first with it; we have no idea about yields, and if yields are not good the price can go much higher).

    Micron has specifically said the card has 12GB. They make the VRAM.

  • @outrider42 @kenshaw011267

    I know, I know... clearly a triumph of hope over experience on my part. I'm just dreaming of $2800 plus an NVLink bridge getting me 48GB of VRAM in Cycles. I don't think it'd be as fast as my current system, but in the final analysis, I really don't care.

  • kyoto kid Posts: 41,256
    edited August 2020
    Post edited by kyoto kid on
  • nicstt Posts: 11,715
    kyoto kid said:

    yeh, I'm waiting till I see what AMD do - as they work in Cycles; I object to being forced to deal with Nvidia.

    I might choose to use their products, but a render engine that only works on one brand is annoying at best.

  • nicstt Posts: 11,715
    edited August 2020
    marble said:

    Oh boy. I'm considering selling my car and using the bus to get the money for one ;)

    Seriously though, does that suggest that the 16GB 3070 might be true too - and at what price?

    Save yourself some cash and use Blender.

    The most interesting takeaway I have from all this guesswork and speculation is that if the 3090 turns out to be the new beast from the green team, it allows them the opportunity to add a Ti variant in the event that team red kicks their arse.

    Post edited by nicstt on
  • nonesuch00 Posts: 18,320

    Yes, me too. Or doing odd jobs to save up and buy one. 24GB with realtime ray tracing is awesome and just what I need.

  • All I want is an RTX 3070 with 12GB to 16GB of GDDR6.

  • outrider42 Posts: 3,679

    24GB of GDDR6X by itself, cost to the manufacturer, is more than $400 US (at best - these cards are the first with it; we have no idea about yields, and if yields are not good the price can go much higher).

    Micron has specifically said the card has 12GB. They make the VRAM.

    Nvidia would not allow Micron to just casually announce a spec like that. People didn't even know GDDR6X existed at all until just now. Keep in mind that rumors have been talking about 21Gbps VRAM for a long time without any evidence it existed. So some people were dismissing those rumors. The 24GB rumor has also been around a long time and has come from multiple sources.

    It would be the most expensive part, but if true it is still a $1400 card as a starting price. This card will still easily clear several hundred above the cost of production. And Nvidia's Founders Edition would be $1500. So like the 2080ti before it, most 3rd-party cards would probably be around that $1500 mark, and probably more. The 2080ti had a base price of $1000, but how many cards were actually that price? So I would expect a fair number of cards to be in the $1600 range.

    Again that's assuming the prices are correct.
  • kenshaw011267 Posts: 3,805

    24GB of GDDR6X by itself, cost to the manufacturer, is more than $400 US (at best - these cards are the first with it; we have no idea about yields, and if yields are not good the price can go much higher).

    Micron has specifically said the card has 12GB. They make the VRAM.

     

    Nvidia would not allow Micron to just casually announce a spec like that. People didn't even know GDDR6X existed at all until just now. Keep in mind that rumors have been talking about 21Gbps VRAM for a long time without any evidence it existed. So some people were dismissing those rumors. The 24GB rumor has also been around a long time and has come from multiple sources.

     

    It would be the most expensive part, but if true it is still a $1400 card as a starting price. This card will still easily clear several hundred above the cost of production. And Nvidia's Founders Edition would be $1500. So like the 2080ti before it, most 3rd-party cards would probably be around that $1500 mark, and probably more. The 2080ti had a base price of $1000, but how many cards were actually that price? So I would expect a fair number of cards to be in the $1600 range.

     

    Again that's assuming the prices are correct.

    !

    $400 For just the VRAM

    Add in the cost of the GPU (NVidia won't tell but based on other chips figure $200)

    Power components for the reference 2080ti came out to around $75

    PCB (traces, layers etc.) ~$100

    So ballpark of $775 for the parts (not for the manufacturing just the pieces).

    It's all surface mount which is reasonably cheap, luckily. A few bucks per component (including testing and QA). But there are about a hundred components on each board. Call it $250 each.

    $1025

    No packaging. No design of the AIB card. No shipping. No warehousing costs anywhere in the supply chain. No Nvidia cut. Add those in and where does the profit come from?

    A 3090 with 24GB of VRAM is going to cost a lot more than $1400. There's just no way the AIBs can make any money at that price point. Math remains a thing.
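    The breakdown above adds up like this (every number is the poster's estimate, not an actual Nvidia or AIB figure):

```python
# Hypothetical bill of materials for a 24GB 3090, using only the
# estimates from the post above.
bom = {
    "GDDR6X (24GB)": 400,  # "more than $400" per the post
    "GPU die":       200,  # guess based on comparable chips
    "power stages":   75,  # reference 2080 Ti figure
    "PCB":           100,
}
parts = sum(bom.values())  # $775 for the pieces alone
assembly = 250             # surface-mount assembly, testing, QA per board
total = parts + assembly   # $1025 before packaging, shipping, margins
print(f"parts ${parts}, built ${total}")
```

    Packaging, AIB design costs, shipping, warehousing and Nvidia's cut would all sit on top of that total, which is the argument being made.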

  • nonesuch00 Posts: 18,320

    24GB of GDDR6X by itself, cost to the manufacturer, is more than $400 US (at best - these cards are the first with it; we have no idea about yields, and if yields are not good the price can go much higher).

    Micron has specifically said the card has 12GB. They make the VRAM.

     

    Nvidia would not allow Micron to just casually announce a spec like that. People didn't even know GDDR6X existed at all until just now. Keep in mind that rumors have been talking about 21Gbps VRAM for a long time without any evidence it existed. So some people were dismissing those rumors. The 24GB rumor has also been around a long time and has come from multiple sources.

     

    It would be the most expensive part, but if true it is still a $1400 card as a starting price. This card will still easily clear several hundred above the cost of production. And Nvidia's Founders Edition would be $1500. So like the 2080ti before it, most 3rd-party cards would probably be around that $1500 mark, and probably more. The 2080ti had a base price of $1000, but how many cards were actually that price? So I would expect a fair number of cards to be in the $1600 range.

     

    Again that's assuming the prices are correct.

    !

    $400 For just the VRAM

    Add in the cost of the GPU (NVidia won't tell but based on other chips figure $200)

    Power components for the reference 2080ti came out to around $75

    PCB (traces, layers etc.) ~$100

    So ballpark of $775 for the parts (not for the manufacturing just the pieces).

    It's all surface mount which is reasonably cheap, luckily. A few bucks per component (including testing and QA). But there are about a hundred components on each board. Call it $250 each.

    $1025

    No packaging. No design of the AIB card. No shipping. No warehousing costs anywhere in the supply chain. No Nvidia cut. Add those in and where does the profit come from?

    A 3090 with 24GB of VRAM is going to cost a lot more than $1400. There's just no way the AIBs can make any money at that price point. Math remains a thing.

    The $400 VRAM figure would be a hypothetical, made-up consumer list price, I think.

    I think the 3090 will be less than $1400, if not on release then not long after when the top of the line Big Navi is announced.

  • kenshaw011267 Posts: 3,805

    24GB of GDDR6X by itself, cost to the manufacturer, is more than $400 US (at best - these cards are the first with it; we have no idea about yields, and if yields are not good the price can go much higher).

    Micron has specifically said the card has 12GB. They make the VRAM.

     

    Nvidia would not allow Micron to just casually announce a spec like that. People didn't even know GDDR6X existed at all until just now. Keep in mind that rumors have been talking about 21Gbps VRAM for a long time without any evidence it existed. So some people were dismissing those rumors. The 24GB rumor has also been around a long time and has come from multiple sources.

     

    It would be the most expensive part, but if true it is still a $1400 card as a starting price. This card will still easily clear several hundred above the cost of production. And Nvidia's Founders Edition would be $1500. So like the 2080ti before it, most 3rd-party cards would probably be around that $1500 mark, and probably more. The 2080ti had a base price of $1000, but how many cards were actually that price? So I would expect a fair number of cards to be in the $1600 range.

     

    Again that's assuming the prices are correct.

    !

    $400 For just the VRAM

    Add in the cost of the GPU (NVidia won't tell but based on other chips figure $200)

    Power components for the reference 2080ti came out to around $75

    PCB (traces, layers etc.) ~$100

    So ballpark of $775 for the parts (not for the manufacturing just the pieces).

    It's all surface mount which is reasonably cheap, luckily. A few bucks per component (including testing and QA). But there are about a hundred components on each board. Call it $250 each.

    $1025

    No packaging. No design of the AIB card. No shipping. No warehousing costs anywhere in the supply chain. No Nvidia cut. Add those in and where does the profit come from?

    A 3090 with 24GB of VRAM is going to cost a lot more than $1400. There's just no way the AIBs can make any money at that price point. Math remains a thing.

    The $400 VRAM figure would be a hypothetical, made-up consumer list price, I think.

    I think the 3090 will be less than $1400, if not on release then not long after when the top of the line Big Navi is announced.

    The only consumers of GDDR6X chips are GPU card and cell phone manufacturers. We know precisely what GDDR6 costs manufacturers; Micron and others make those price lists publicly available (this is required in several places as part of anti-trust laws). Micron has said how much more expensive they expect GDDR6X to be initially (2.5x at least, which is also what GDDR5X was). You are welcome to go find the current cost of Micron's GDDR6 chips and do the math yourself. Everything I posted is based on public data.

    There is a reason people who know the semiconductor industry have been very skeptical of these claims of massive increases in VRAM on Ampere. The cost of GDDR6 is too high to allow it on the cards. The idea that the 3090, the 2080ti replacement, would have 24GB of GDDR6X and be a mere $100 higher than the 2080ti is just not credible.

    Add in that Micron itself has said that the card has 12GB of VRAM...

  • The only consumers of GDDR6X chips are GPU card and cell phone manufacturers. We know precisely what GDDR6 costs manufacturers; Micron and others make those price lists publicly available (this is required in several places as part of anti-trust laws). Micron has said how much more expensive they expect GDDR6X to be initially (2.5x at least, which is also what GDDR5X was). You are welcome to go find the current cost of Micron's GDDR6 chips and do the math yourself. Everything I posted is based on public data.

    There is a reason people who know the semiconductor industry have been very skeptical of these claims of massive increases in VRAM on Ampere. The cost of GDDR6 is too high to allow it on the cards. The idea that the 3090, the 2080ti replacement, would have 24GB of GDDR6X and be a mere $100 higher than the 2080ti is just not credible.

    Add in that Micron itself has said that the card has 12GB of VRAM...

    Everything you posted may be public data, but it is not relevant because that is not how a $300 billion company procures. Unless you are a true Competitive Intelligence professional, you can't be sure what price NVidia has negotiated, nor what kind of futures contract they were able to buy. You probably just discovered the upper limit.

    I'm not saying that the rumors are credible, just that the certainty with which you write is unwarranted because you can't possibly know what you would need to know to be so certain. Math is certainly still a thing, but so is "trash in, trash out".

  • kenshaw011267 Posts: 3,805

    The only consumers of GDDR6X chips are GPU card and cell phone manufacturers. We know precisely what GDDR6 costs manufacturers; Micron and others make those price lists publicly available (this is required in several places as part of anti-trust laws). Micron has said how much more expensive they expect GDDR6X to be initially (2.5x at least, which is also what GDDR5X was). You are welcome to go find the current cost of Micron's GDDR6 chips and do the math yourself. Everything I posted is based on public data.

    There is a reason people who know the semiconductor industry have been very skeptical of these claims of massive increases in VRAM on Ampere. The cost of GDDR6 is too high to allow it on the cards. The idea that the 3090, the 2080ti replacement, would have 24GB of GDDR6X and be a mere $100 higher than the 2080ti is just not credible.

    Add in that Micron itself has said that the card has 12GB of VRAM...

    Everything you posted may be public data, but it is not relevant because that is not how a $300 billion company procures. Unless you are a true Competitive Intelligence professional, you can't be sure what price NVidia has negotiated, nor what kind of futures contract they were able to buy. You probably just discovered the upper limit.

    I'm not saying that the rumors are credible, just that the certainty with which you write is unwarranted because you can't possibly know what you would need to know to be so certain. Math is certainly still a thing, but so is "trash in, trash out".

    Nvidia buys hardly any of these chips. The partners do, and I absolutely do know what they pay because, as I just stated, Micron et al. have to publish what they charge. And none of the partners sells enough volume to get any sort of discount.

    There are three companies that make all the RAM: Micron, Samsung and SK Hynix. So when demand/prices fluctuate, they can shift production from one sort to another. This greatly affects the enterprise world, so it is tracked very closely. I get the DRAM prices in an email every morning. I can easily enough go to the same source and get the GDDR prices. These are not secrets. After the companies got busted the last time, or the time before that, for price fixing, they had to start publishing their pricing.

  • nonesuch00nonesuch00 Posts: 18,320

    24GB of GDDR6X by itself, cost to the manufacturer, is more than $400 US (at best; these cards are the first with it, we have no idea about yields, and if yields are not good the price can go much higher).

    Micron has specifically said the card has 12GB. They make the VRAM.

     

    Nvidia would not allow Micron to just casually announce a spec like that. People didn't even know GDDR6X existed at all until just now. Keep in mind that rumors have been talking about 21Gbps VRAM for a long time without any evidence it existed, so some people were dismissing those rumors. The 24GB rumor has also been around a long time and has come from multiple sources.

     

    It would be the most expensive part, but if true it is still a $1400 card as a starting price. This card will still easily clear several hundred above the cost of production, and Nvidia's Founders Edition would be $1500. So like the 2080ti before it, most 3rd-party cards would probably be around that $1500 mark, and probably more. The 2080ti had a base price of $1000, but how many cards were actually that price? So I would expect a fair number of cards to be in the $1600 range.

     

    Again that's assuming the prices are correct.


    $400 for just the VRAM.

    Add in the cost of the GPU (Nvidia won't tell, but based on other chips figure $200).

    Power components for the reference 2080ti came out to around $75.

    PCB (traces, layers, etc.) ~$100.

    So a ballpark of $775 for the parts (not for the manufacturing, just the pieces).

    It's all surface mount, which is reasonably cheap, luckily: a few bucks per component (including testing and QA). But there are about a hundred components on each board, so call it $250 per board.

    $1025

    No packaging. No design of the AIB card. No shipping. No warehousing costs anywhere in the supply chain. No Nvidia cut. Add those in and where does the profit come from?

    A 3090 with 24GB of VRAM is going to cost a lot more than $1400. There's just no way the AIBs can make any money at that price point. Math remains a thing.
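    Summing the post's own figures makes the gap obvious. All of these numbers are the rough estimates given above, not confirmed component prices:

```python
# Bill-of-materials tally using the rough estimates from the post above.
bom_usd = {
    "24GB GDDR6X VRAM": 400,   # at-best estimate
    "GPU die": 200,            # guess based on other chips
    "power components": 75,    # reference 2080ti figure
    "PCB": 100,                # traces, layers, etc.
}
parts = sum(bom_usd.values())    # parts only, no manufacturing
assembly = 250                   # ~100 SMT components at a few $ each
board = parts + assembly         # board cost before packaging, shipping,
                                 # warehousing, or anyone's margin
print(parts, board)  # 775 1025
```

    That $1025 leaves roughly $375 under a $1400 sticker price to cover packaging, AIB design, shipping, warehousing, Nvidia's cut, and the AIB's own profit.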

    $400 VRAM would be a hypothetical made-up list price for consumers, I think.

    I think the 3090 will be less than $1400, if not on release then not long after, when the top-of-the-line Big Navi is announced.


    I'm not much of a tech hound (the two new computers I got this year were the first brand-new hardware I've bought since 2006), but I've read that 12GB number too, in one of the many articles posted to these forums. But instead of what you claim: when I read that 12GB claim (hold onto your hat), it was 12GB per side of the GPU card, so the card actually has 24GB of RAM total. I've never seen a GPU like that, but apparently they are needed sometimes. I have seen plenty of double-sided RAM memory sticks, though.

    I don't believe for a second nVidia is paying $400 USD for 12GB of GDDR6X RAM, and I don't believe for a second the 3090 is going to have only 12GB of RAM; not only because of what all those rumour sites claim, but because we already know what is on the entire lineup of nVidia's 20XX series of GPUs. A 3090 card with only 12GB of RAM would be a major blunder on nVidia's part, one I don't think they will make. They know that to make realtime raytracing really fast, their cards must have enough RAM the GPU has superfast access to; there is no way around it.

  • nicsttnicstt Posts: 11,715


    It doesn't really matter what we believe, or how much we've read about on the net.

    ... Until it's announced, it is at best speculation, with much of it being guesswork if not pure guesswork.

    When the time comes, some will claim to have been right; that isn't because they had accurate data, only that some guesses always stand a chance of being correct.

  • kenshaw011267kenshaw011267 Posts: 3,805


    I never said $400 for 12GB, I wrote $400 for 24GB. And Micron did not say anything at all about it being on one side of the card; they just said it would be 12GB. Putting chips on both sides of the card is less than ideal on a consumer card: they need to be cooled, which would mean adding some sort of cooling to the backside of the PCB, making the cards even thicker and messing with spacing even more. Instead they would just go with 2GB chips; that's how the current RTX Titan works. That would be far better than making a card that is not just 2 or 3 slots wide but also protrudes backwards.

    Going up 1GB from the 2080ti would be fine. It's not like there are any games out there pushing the boundaries of 11GB. Consumer cards are sold to gamers, and for gamers 8GB is still more than enough (Cyberpunk is the big touchstone release on the horizon, and the recommended hardware is a 2060, so it will run with 6GB of VRAM). They're bumping up to 12GB to match the consoles for those who play at 4K, still a tiny minority of gamers.

  • Agreed. This is a complete load of bollocks, to be honest. Anyone believing that Nvidia is going to put 24GB of VRAM into their "flagship" graphics card that will supposedly be near equal to or more powerful than a Titan RTX (which has the same amount of VRAM) and then sell it for half the price of a Titan RTX is living a pipe dream. I don't see there being any significant increase in the VRAM amounts. There's just no point to it.

  • kyoto kidkyoto kid Posts: 41,256

    ...
