Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • duckbomb said:

    *bangs head against wall*

    dedicated display cards are POINTLESS!

    Windows always reserves VRAM on every consumer-grade video card as if it had a monitor connected. It may even have the card try to output a signal; that is less clear. But there is clearly nothing to be gained by not having every installed Nvidia card selected for use in rendering.

    This myth that there is some benefit to having dedicated cards needs to die. People are wasting money and resources on it.

    What if you have one card that is doing DAZ rendering and the other one you have excluded from rendering in DAZ... wouldn't that be ideal for actually being able to use the machine while it's rendering? 

    I'm not trying to agitate here; it's just what I've always been led to believe. Honestly, I have two cards, but the GTX 1660 is disabled in the BIOS because I couldn't get the Quadro to run correctly (driver conflicts) when I was using it, so I just decided to use the one card for everything. But in the situation above I'd think you'd see a benefit from running a dedicated "display card".

    If the card is disabled, why is it plugged in? And again, no. There is no benefit when all the cards are consumer grade. NONE.

    If you have a pro card or prosumer card that can be put in TCC mode, and the know-how to do so, then there is a benefit.

    I have 2 cards. I use both for rendering. I use my computer while rendering without issue. I think the people who have issues with using their systems while rendering are either using the CPU, which always pegs the CPU, or have very low-spec GPUs (which likely means they fail over to the CPU). That's the reason WDDM reserves VRAM in the first place.

    I have the disabled card plugged in because I don't want to power down and open my computer.  It'll probably annoy tech people here to know that I also have the Quadro RTX 8000 plugged into the slower PCIe slot because I misunderstood which was which.  But it works for me.

    My computer is actually faster than I ever need it to be, so I haven't taken the time to optimize it, but I probably should.

    It's interesting to have learned there's no point to a "monitor card", so thanks for that.  I'll probably just keep that disabled one in there for the foreseeable future because I don't like opening it up lol.
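
    For anyone who wants to see how much VRAM Windows is actually holding back on each card, here is a minimal sketch using the NVML Python bindings (the pynvml package is an assumption here; any NVML wrapper exposing the same calls would do). It prints each GPU's total and free VRAM and whether the driver flags a display as active on it; the gap between total and free on an otherwise idle card is roughly the WDDM reservation being argued about above.

    # Minimal sketch: report total/free VRAM and display-active flag per Nvidia GPU.
    # Assumes the NVML Python bindings are installed (e.g. "pip install nvidia-ml-py").
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older bindings return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # values are in bytes
            try:
                display = bool(pynvml.nvmlDeviceGetDisplayActive(handle))
            except pynvml.NVMLError:
                display = "unknown"  # some GPUs/drivers do not report this
            print(f"GPU {i} ({name}): "
                  f"{mem.total / 2**30:.3f} GiB total, "
                  f"{mem.free / 2**30:.3f} GiB free, "
                  f"display active: {display}")
    finally:
        pynvml.nvmlShutdown()

    # Pro/prosumer cards that support TCC are normally switched with
    # "nvidia-smi -i <index> -dm 1" from an elevated prompt plus a reboot;
    # consumer GeForce cards stay in WDDM regardless.

    The figures it prints should be in the same ballpark as the "available" numbers the Daz log reports for each CUDA device.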

  • joseft Posts: 310
    duckbomb said:

    *bangs head against wall*

    dedicated display cards are POINTLESS!

    Windows always reserves VRAM on every consumer-grade video card as if it had a monitor connected. It may even have the card try to output a signal; that is less clear. But there is clearly nothing to be gained by not having every installed Nvidia card selected for use in rendering.

    This myth that there is some benefit to having dedicated cards needs to die. People are wasting money and resources on it.

    What if you have one card that is doing DAZ rendering and the other one you have excluded from rendering in DAZ... wouldn't that be ideal for actually being able to use the machine while it's rendering? 

    I'm not trying to agitate here; it's just what I've always been led to believe. Honestly, I have two cards, but the GTX 1660 is disabled in the BIOS because I couldn't get the Quadro to run correctly (driver conflicts) when I was using it, so I just decided to use the one card for everything. But in the situation above I'd think you'd see a benefit from running a dedicated "display card".

    If the card is disabled why is it plugged in? And again no. There is no benefit when all the cards are consumer grade. NONE.

    If you have a pro card or prosumer card that can be put in TCC mode and have the know how to do so then there is a benefit.

    I have 2 cards. I use both for rendering. I use my computer while rendering without issue. I think the people who have issues with using their systems while rendering are either using the CPU, which always pegs the CPU, or have very low-spec GPUs (which likely means they fail over to the CPU). That's the reason WDDM reserves VRAM in the first place.

    I have the disabled card plugged in because I don't want to power down and open my computer.  It'll probably annoy tech people here to know that I also have the Quadro RTX 8000 plugged into the slower PCIE slot because I misunderstood which was which.  But it works for me.

    My computer is actually faster than I ever need it to be, so I haven't taken the time to optimize it, but I probably should.

    It's interesting to have learned there's no point to a "monitor card", so thanks for that.  I'll probably just keep that disabled one in there for the foreseeable future because I don't like opening it up lol.

    I disagree with his statement that there is no point, at least in the most literal sense you could interpret it.

    For many people there may not be any point. Not everyone's use case is the same. Therefore, for some people, there may indeed be a good reason to run a card specifically to drive displays.

    For someone who isn't doing any major multitasking, sure, there's no reason not to use all available GPUs to power a render. But say I wanted to start rendering an animation and, while that is chugging away, play a game. If all available GPUs are running the render, at best it's going to slow the render down. At worst, something is going to crash. Most likely, everything will keep running but the game will run horribly.

    I can think of a few other cases where the result would be similar. Could be done, just not the best experience.

  • Ghosty12 Posts: 2,068

    Well, the reviews for the 3080 are coming in, and so far it seems to be good.

  • nicstt Posts: 11,715
    edited September 2020

    Reviews are starting to go up.

    Others are becoming available, as you'll likely notice if you open this up on YouTube.

    Edit

    I'm not that impressed, but there is a definite and reasonable improvement. Considering what was said about the two RTX games, I suspect that for rendering the upgrade might be worth having, even for those with a 2080 Ti.

    There were no Blender comparisons, so off to see what other reviews offer.

    Post edited by nicstt on
  • nonesuch00 Posts: 18,320

    Yes, reading the reviews and specs, I think I might save money until the multi-chiplet GPUs are ready in a couple of years and buy a GeForce RTX 3060 Ti with 8 GB GDDR6 RAM. It looks like nVidia did this expressly to get folks clinging to their 10xx-series nVidia GPUs to upgrade for less than $500.

  • nicstt Posts: 11,715

    I think this is the most balanced review; there are others with more raw data, which is useful for sure.

     

  • billyben_0077a25354 Posts: 771
    edited September 2020

    For you old [H]ard-OCP fans, here is Brent Justice's review from The FPS Review website:

    https://www.thefpsreview.com/2020/09/16/nvidia-geforce-rtx-3080-founders-edition-review/

    Post edited by billyben_0077a25354 on
  • OMG! Best Buy already has the MSI RTX 3080 on sale for $749.00?  They are already starting to price gouge.

  • nicstt Posts: 11,715

    Well the RRP is only valid if it is possible to buy it on a regular basis.

    ... We'll see how often (or how rarely) it turns out to be regularly available.

  • All/most AIB cards will not sell at the FE price. Checking around, it looks like either $739 or $749 is the MSRP of the Ventus.

    This does not look like price gouging. The ones selling for over $1,000 on Newegg and Amazon are price gouging.

     

     

  • outrider42 Posts: 3,679

    Third-party prices are not surprising. Nvidia is able to control its costs better; after all, these are Nvidia-designed chips. Obviously it sells them to third parties at a profit, and then the AIBs need to design everything else, cooler and all. So yeah, expect typical prices to be anywhere from $50 to $100 more. Cards with more exotic cooling will cost more than that.

    But $1,000? Yeah, that's some gouging there.

  • nicstt Posts: 11,715

    It doesn't matter if you have a low-priced card if it's rarely in stock; I guess we'll see how often that occurs.

  • marble Posts: 7,500

    If I'm at all representative, the rumours of a double VRAM version (3070 and 3080) might be creating a market in waiting. Or it could be that gamers don't care much about VRAM.

  • RayDAnt Posts: 1,147

    *bangs head against wall*

    dedicated display cards are POINTLESS!

    Windows always reserves VRAM on every consumer-grade video card as if it had a monitor connected. It may even have the card try to output a signal; that is less clear. But there is clearly nothing to be gained by not having every installed Nvidia card selected for use in rendering.

    This myth that there is some benefit to having dedicated cards needs to die. People are wasting money and resources on it.

    Um, guys... do you know that the Windows 10 2004 update mostly fixes this issue?

    Now Windows 10 only reserves a flat 900 MB (that is megabytes) of VRAM REGARDLESS of VRAM capacity. It no longer takes a percentage like it used to.

    On my 1080 Tis, they used to report having 9.1 GB available. But after updating to 2004, well, here it is straight from my Daz log file:

    2020-09-15 22:28:59.187 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): compute capability 6.1, 11.000 GiB total, 10.041 GiB available
    2020-09-15 22:28:59.189 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): compute capability 6.1, 11.000 GiB total, 10.039 GiB available, display attached

    Boomshockalocka!

    As you can see, they now report over 10GB available. So you guys do not need to freak out over whether the 3090 has TCC mode or not. The 3090 should report 23GB of available VRAM. I think this is fair. It is certainly better than it was before, and you don't have to shell out extra cash for a Titan or Quadro just to get TCC mode and a single extra GB of VRAM.

    This is something that perhaps @RayDAnt could test with his Titan RTX to see how much VRAM it reports with 2004.

    And this also means that you are not quite correct, kenshaw, because while Windows does reserve VRAM, 900 MB of VRAM is probably less than the display GPU is using to push the Daz app and Windows. Plus, this neglects the case where somebody uses the Iray preview mode in the viewport. With a dedicated display GPU they can choose to use the display GPU for Iray preview, so the rendering GPU is not burdened by Iray preview.

    One last note before some of you rush out to install 2004 if you haven't already: the update for 2004 is a big one and will take some time to install. This one took much longer than my previous updates. At one point I started to wonder if my PC was not going to reboot; there was a long period, maybe 10 minutes, where the screen was totally black with no indication of anything going on. So give yourselves plenty of time for this update. But yes, this update is excellent news for anybody who uses rendering software.

    Can confirm that the latest major Windows update does indeed open up significantly more video memory to CUDA apps like Iray on my Titan RTX than seen previously. Prior to updating, I was typically seeing something like this:

    2020-07-15 00:29:22.534 Iray [INFO] - IRAY:RENDER ::   1.1   IRAY   rend info : CUDA device 0 (TITAN RTX): compute capability 7.5, 24.000 GiB total, 20.069 GiB available, display attached

    Just popped open my log file, and I am now seeing this:

    2020-09-16 22:04:29.938 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (TITAN RTX): compute capability 7.5, 24.000 GiB total, 22.875 GiB available, display attached

    Which - while not exactly matching the 900 MB figure you mention - is in the ballpark of that amount and of the observed amount on your 1080 Ti (11 GB per GPU) based system, despite the fact that the Titan RTX is a card with more than double the VRAM capacity. This backs up the claim that the WDDM VRAM usage penalty is no longer proportional.

     

    Furthermore, despite claims made by some, the presence and - almost more importantly - the configuration of displays physically connected to a specific GPU do indeed affect the amount of VRAM actually usable (rather than just theoretically available - which is what the Daz/Iray log file excerpts quoted above refer to) by user-initiated apps like Iray. You can demonstrate this for yourself by doing the following:

    1. Using a display connected to a physical output on a discrete GPU, right-click the Desktop and select Display Settings.
    2. Open an instance of GPU-Z, select the Sensors tab, and select from the dropdown list the same GPU as the one driving the display in question.
    3. Play around with changing the display resolution in Display Settings while keeping an eye on the "Memory Used" graph in the GPU-Z Sensors tab and making no other changes to running apps or the like (a small polling sketch that automates this check follows after this post).

    What you will find is that the exact amount of VRAM used on that GPU changes with different display settings. This is because things like the WDDM framebuffer needed for each connected display exist solely in the VRAM of the GPU it is physically connected to. Meaning that there is sound logic (especially on systems using one or more high-resolution 4K+ displays) to having "dedicated" display GPUs in a system where practical (rather than just theoretical) VRAM availability is a priority.
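
    A rough way to reproduce the same measurement without GPU-Z is to poll NVML's used-memory counter while changing the display resolution. A minimal sketch along those lines (again assuming the pynvml bindings, and that DEVICE_INDEX points at the GPU driving the display being tested) would be:

    # Minimal sketch: print used VRAM on one GPU about once a second, e.g. while
    # changing the resolution in Windows Display Settings, and watch the value move.
    import time
    import pynvml

    DEVICE_INDEX = 0  # adjust to whichever GPU drives the display being tested

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(DEVICE_INDEX)
        while True:
            used_mib = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20
            print(f"used: {used_mib:8.1f} MiB")
            time.sleep(1.0)
    except KeyboardInterrupt:
        pass
    finally:
        pynvml.nvmlShutdown()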

  • There's been some grumbling about the drop from 11GB to 10. However, a lot of the knowledgeable tech reviewers have been pointing out that no game currently on the market uses 10GB; I think the current highest-consumption game is Red Dead 2, which at 4K with all the texture settings maxed uses about 6GB. That will change if/when Cyberpunk launches, but even that doesn't mean it will need more than 10 at max settings.

    So the only people who really want more than 10 are creators (more VRAM is great for video editing and other tasks beyond Iray) and less well informed consumers who just want big numbers.

    A lot of people seem to think Nvidia will do another launch before Xmas, but I think that's unlikely. There's just no time and likely no chips. AMD's Big Navi announcement will be late in October, so even if there is something Nvidia wants to respond to, it would be hard to get a new product on shelves in time for the holidays unless they just have it waiting to go on spec.

  • joseft Posts: 310

    There's been some grumbling about the drop from 11Gb to 10. However a lot of the knowledgeable tech reviewers have been pointing out that no game currently on the market uses 10Gb, I think the current highest consumption game is Red Dead 2 which at 4k with all the texture settings maxed uses about 6. But that will change if/when Cyberpunk launches. But even that doesn't say it will need more than 10 at max settings.

    So the only people who really want more than 10 are creators, more VRAM is great for video editing and other tasks beyond iRay, and less well informed consumers who just want big numbers.

    A lot of people seem to think Nvidia will do another launch before Xmas but I think that's unlikely. There's just no time and likely no chips. AMD's big Navi announcement will be late in Oct. so even if there is something that Nvidia wants to respond to it would be hard to get a new product on shelves in time for the holiday unless they just have it waiting to go on spec. 

    Yes, I am of the mind that they already have these other cards, maybe not sitting there waiting to go on shelves, but developed and ready for mass production. Ready to respond to whatever AMD brings to the table, so they can press that button if AMD comes out swinging, or sit back, soak up some revenue, and release at a much later time if AMD cannot compete. It would be the smart thing to do, and given Nvidia's growth over the past few years, I think they have definitely been in the habit of making smart decisions.

  • RayDAnt Posts: 1,147
    edited September 2020

    There's been some grumbling about the drop from 11Gb to 10. However a lot of the knowledgeable tech reviewers have been pointing out that no game currently on the market uses 10Gb, I think the current highest consumption game is Red Dead 2 which at 4k with all the texture settings maxed uses about 6. But that will change if/when Cyberpunk launches. But even that doesn't say it will need more than 10 at max settings.

    So the only people who really want more than 10 are creators, more VRAM is great for video editing and other tasks beyond iRay, and less well informed consumers who just want big numbers.

    A lot of people seem to think Nvidia will do another launch before Xmas but I think that's unlikely. There's just no time and likely no chips. AMD's big Navi announcement will be late in Oct. so even if there is something that Nvidia wants to respond to it would be hard to get a new product on shelves in time for the holiday unless they just have it waiting to go on spec. 

    Nvidia's decision to drop VRAM capacities ties directly into its decision to launch GPUDirect Storage (effectively a consumer-oriented spin on its decade-in-the-making enterprise level GPUDirect featureset). Assuming that the tech takes off (which it almost certainly will imo, given that all the new consoles are already set to use the same memory/storage pipelining) there should be no reason for anyone to even need significantly larger VRAM capacity consumer-oriented GPUs (unless - of course - they are attempting to run not-yet-GPUDirect Storage optimized professional apps like Iray on them...) Which is why I am personally not expecting to see eg. any 16GB/20GB 3070/3080 variants. Most likely ever.

    Post edited by RayDAnt on
  • RayDAnt said:

    There's been some grumbling about the drop from 11Gb to 10. However a lot of the knowledgeable tech reviewers have been pointing out that no game currently on the market uses 10Gb, I think the current highest consumption game is Red Dead 2 which at 4k with all the texture settings maxed uses about 6. But that will change if/when Cyberpunk launches. But even that doesn't say it will need more than 10 at max settings.

    So the only people who really want more than 10 are creators, more VRAM is great for video editing and other tasks beyond iRay, and less well informed consumers who just want big numbers.

    A lot of people seem to think Nvidia will do another launch before Xmas but I think that's unlikely. There's just no time and likely no chips. AMD's big Navi announcement will be late in Oct. so even if there is something that Nvidia wants to respond to it would be hard to get a new product on shelves in time for the holiday unless they just have it waiting to go on spec. 

    Nvidia's decision to drop VRAM capacities ties directly into its decision to launch GPUDirect Storage (effectively a consumer-oriented spin on its decade-in-the-making enterprise level GPUDirect featureset). Assuming that the tech takes off (which it almost certainly will imo, given that all the new consoles are already set to use the same memory/storage pipelining) there should be no reason for anyone to even need significantly larger VRAM capacity consumer-oriented GPUs (unless - of course - they are attempting to run not-yet-GPUDirect Storage optimized professional apps like Iray on them...) Which is why I am personally not expecting to see eg. any 16GB/20GB 3070/3080 variants. Most likely ever.

    Maybe. I'll finally see that tech when the A100 rack gets delivered next year. But that is not what is in the PlayStation. The PlayStation is not doing anything like that. It's off-the-shelf HW with custom firmware to do what seems to boil down to some sort of HW-level compression and prefetch. But until the thing is actually released, no one is going to be sure, because the explanations have been so muddled.

  • I usually buy my new GPU as soon as possible, so that I get the most time with it, but this time around I'm not so sure. The 3090 sure looks nice, but it is probably going to be near 2000 €, and I'd probably need a new PSU as well. The 3080 looks really nice too, but I'm not so thrilled about decreasing my GPU memory, even by 1 GB. I think this time I'll wait for the AMD Big Navi launch and hope team red comes up with a big surprise, which hopefully forces Nvidia to release a 3080 Ti/Super or something early with more memory. If AMD can't deliver, my 2080 Ti is still a good card for rendering and I might just skip this generation.

  • rrward Posts: 556
    edited September 2020

    *bangs head against wall*

    dedicated display cards are POINTLESS!

    No, they are not. They allow you to use your computer for other tasks while your render is going, without having to sacrifice either render speed or Windows response time. You don't need a monster card for your display; I've used 1060s and 1070s at 4K for the task and they work just fine.

    Post edited by rrward on
  • rrward Posts: 556
    Mendoman said:

    I usually buy my new GPU as soon as possible, so that I get the most time with it, but this time around I'm not so sure. 3090 sure looks nice, but that is probably going to be near 2000 €, and I'd probably need a new PSU also. 3080 looks really nice too, but I'm not so thrilled to decrease my GPU memory....even 1 Gb. I think this time I'll wait for the AMD Big Navi launch, and hope team red comes with a big surprise, which hopefully forces Nvidia to release 3080ti/super or something early with more memory. If AMD can't deliver, my 2080ti is still a good card for rendering and I might just skip this generation.

    I got seriously burned when the 10x0 line came out and neither macOS (which I was using at the time), nor Studio (which I wasn't), nor Octane (which I tried to learn) could use the cards for almost a year, forcing me to switch back to Windows. Then the 20x0 cards came out and Studio couldn't use those for a time (a lot less than the 10x0 wait, but still). So I wait until the software I need them for has been demonstrated to work. I hated having that 1080 Ti sitting on a shelf for a year.

  • RayDAnt said:

    There's been some grumbling about the drop from 11Gb to 10. However a lot of the knowledgeable tech reviewers have been pointing out that no game currently on the market uses 10Gb, I think the current highest consumption game is Red Dead 2 which at 4k with all the texture settings maxed uses about 6. But that will change if/when Cyberpunk launches. But even that doesn't say it will need more than 10 at max settings.

    So the only people who really want more than 10 are creators, more VRAM is great for video editing and other tasks beyond iRay, and less well informed consumers who just want big numbers.

    A lot of people seem to think Nvidia will do another launch before Xmas but I think that's unlikely. There's just no time and likely no chips. AMD's big Navi announcement will be late in Oct. so even if there is something that Nvidia wants to respond to it would be hard to get a new product on shelves in time for the holiday unless they just have it waiting to go on spec. 

    Nvidia's decision to drop VRAM capacities ties directly into its decision to launch GPUDirect Storage (effectively a consumer-oriented spin on its decade-in-the-making enterprise level GPUDirect featureset). Assuming that the tech takes off (which it almost certainly will imo, given that all the new consoles are already set to use the same memory/storage pipelining) there should be no reason for anyone to even need significantly larger VRAM capacity consumer-oriented GPUs (unless - of course - they are attempting to run not-yet-GPUDirect Storage optimized professional apps like Iray on them...) Which is why I am personally not expecting to see eg. any 16GB/20GB 3070/3080 variants. Most likely ever.

    Is it a drop, or is it 2GB more? This is not a Ti version. Its price is comparable to the 2080, which has 8GB.
  • bluejaunte Posts: 1,923
    RayDAnt said:

    What you will find is that the exact amount of VRAM used on that GPU changes with different display settings. This is because things like the WDDM framebuffer needed for each connected display exists solely in the VRAM of the GPU it is physically connected to. Meaning that there is sound logic (especially on system setups utilizing one or more high resolution 4K+ displays) to having "dedicated" display GPUs in a system where practical (rather than just theoretical) VRAM availiability is a priority.

    Still wouldn't make any sense to leave the display one disabled IMO, since when you exceed the VRAM of one card, Iray will render only on the second one. So you might as well have both enabled and let Iray do its thing. Of course, this would be such an edge case to begin with: say the scene exceeds the 11GB, minus what little the display needs, of the above-mentioned 1080 Ti, but not whatever other card with similar VRAM. And if your second card has massively more, you probably never need to worry about what little is reserved for the displays anyway.

    Some dirt-cheap card for the displays? Maybe. I don't know, it still doesn't sound like having a second card just for that makes much sense. You won't have the speed of your fast GPU for anything else that's GPU-accelerated. And there's the hassle of a second video card, just for a tiny bit more VRAM while rendering?

    To each their own, I guess. Not for me, but then again I game as well and need a fast GPU in Mari, Photoshop, etc.

  • RayDAnt said:

    What you will find is that the exact amount of VRAM used on that GPU changes with different display settings. This is because things like the WDDM framebuffer needed for each connected display exists solely in the VRAM of the GPU it is physically connected to. Meaning that there is sound logic (especially on system setups utilizing one or more high resolution 4K+ displays) to having "dedicated" display GPUs in a system where practical (rather than just theoretical) VRAM availiability is a priority.

    Still wouldn't make any sense to let the display one disabled IMO since when you exceed the VRAM of one card, Iray will render only on the second one. So you might as well have both enabled and let Iray do its thing. Of course this would be such an edge case to begin with, where let's say the scene exceeds the 11GB minus what little the display needs of the above mentioned 1080 TI, but not whatever other card with similiar VRAM. Or if your second card has massively more than you probably never need to worry about what little is reserved for the displays anyway.

    Some dirt cheap card for the displays? Maybe, I don't know it still doesn't sound like having a second card just for that makes much sense. You won't have the speed of your fast GPU for anything else that's GPU accelerated. Hassle of a second video card also, just for a tiny bit more VRAM while rendering?

    To each their own I guess. Not for me, but then again I game as well and need a fast GPU in Mari, Photoshop etc.

    Some people just cannot be dissuaded from their myths.

  • nicstt Posts: 11,715
    RayDAnt said:

    What you will find is that the exact amount of VRAM used on that GPU changes with different display settings. This is because things like the WDDM framebuffer needed for each connected display exists solely in the VRAM of the GPU it is physically connected to. Meaning that there is sound logic (especially on system setups utilizing one or more high resolution 4K+ displays) to having "dedicated" display GPUs in a system where practical (rather than just theoretical) VRAM availiability is a priority.

    Still wouldn't make any sense to let the display one disabled IMO since when you exceed the VRAM of one card, Iray will render only on the second one. So you might as well have both enabled and let Iray do its thing. Of course this would be such an edge case to begin with, where let's say the scene exceeds the 11GB minus what little the display needs of the above mentioned 1080 TI, but not whatever other card with similiar VRAM. Or if your second card has massively more than you probably never need to worry about what little is reserved for the displays anyway.

    Some dirt cheap card for the displays? Maybe, I don't know it still doesn't sound like having a second card just for that makes much sense. You won't have the speed of your fast GPU for anything else that's GPU accelerated. Hassle of a second video card also, just for a tiny bit more VRAM while rendering?

    To each their own I guess. Not for me, but then again I game as well and need a fast GPU in Mari, Photoshop etc.

    Some people just cannot be dissuaded from their myths.

    Indeed

  • PerttiA Posts: 10,024
    RayDAnt said:

    What you will find is that the exact amount of VRAM used on that GPU changes with different display settings. This is because things like the WDDM framebuffer needed for each connected display exists solely in the VRAM of the GPU it is physically connected to. Meaning that there is sound logic (especially on system setups utilizing one or more high resolution 4K+ displays) to having "dedicated" display GPUs in a system where practical (rather than just theoretical) VRAM availiability is a priority.

    Still wouldn't make any sense to let the display one disabled IMO since when you exceed the VRAM of one card, Iray will render only on the second one. So you might as well have both enabled and let Iray do its thing. Of course this would be such an edge case to begin with, where let's say the scene exceeds the 11GB minus what little the display needs of the above mentioned 1080 TI, but not whatever other card with similiar VRAM. Or if your second card has massively more than you probably never need to worry about what little is reserved for the displays anyway.

    Some dirt cheap card for the displays? Maybe, I don't know it still doesn't sound like having a second card just for that makes much sense. You won't have the speed of your fast GPU for anything else that's GPU accelerated. Hassle of a second video card also, just for a tiny bit more VRAM while rendering?

    To each their own I guess. Not for me, but then again I game as well and need a fast GPU in Mari, Photoshop etc.

    Not forgetting the problems that can be expected with having the drivers for the "dirt cheap" card in the system...

  • 3rd party prices are not surprising. Nvidia is able to control their costs better, after all, these are Nvidia designed chips. Obviously they sell them to 3rd parties for a profit. Then the AIBs need to design everything else, cooler and all. So yeah, look for typical prices to be anywhere from $50 to $100 more. Cards with more exotic cooling will be more than that.

    But $1000? yeah, that's some gouging there.

     

    Also, cards like the ASUS TUF Gaming are faster and quieter out of the box than the NVIDIA FE.

  • Absolute madness on scan.co.uk. They claim that prices start at £649, but the cheapest is £710. People are putting them in baskets and finding the price has jumped by up to £70. Free shipping is advertised but turns out to be £11. The tech world has gone nuts.
  • droidy001 said:
    Absolute madness on scan.co.uk. They claim that prices start at £649, but the cheapest is £710. People are putting them in baskets and finding the price has jumped by up to £70. Free shipping is advertised but turns out to be £11. The tech world has gone nuts.

    Not familiar with the site, but Amazon has third-party sellers playing games with prices and terms, apparently.
