GeForce 5090 - 32GB VRAM @ $2000

ArgleSW Posts: 147
edited January 7 in The Commons

https://www.nvidia.com/en-me/geforce/graphics-cards/50-series/rtx-5090

 

It was just officially announced. Apparently it has twice the performance of a 4090. 32 GB of VRAM is also confirmed.


Comments

  • doh Posts: 1

    That means 2500-3000€ in Germany for AIB models... not to mention 575 watts. Kind of disappointing!!

  • Richard Haseltine Posts: 101,965

    Interesting that the performance graphs include one labelled DS Render - though since it specifies other settings under it I am not sure it means Daz Studio (a big jump, in that scene, if it does though).

  • Masterstroke Posts: 2,023

    A 5070 TI with 16 GB VRAM would be affordable and good enough for me.

  • Rauko Posts: 37
    edited January 7

    Richard Haseltine said:

    Interesting that the performance graphs include one labelled DS Render - though since it specifies other settings under it I am not sure it means Daz Studio (a big jump, in that scene, if it does though).

    It actually says "D5 Render"

     

    (attached image: 5090.jpg)
  • Havos Posts: 5,391

    I assume it means this: https://www.d5render.com/

    I don't know anything about it, but it is a real time renderer with AI features.

  • Expozures Posts: 232

    Really guessing that we probably won't notice a jaw-dropping performance increase.  Maybe a significant one, but maybe not anything that will make you run out and sell your 4090 for a 5090.

    @Rauko, do you use any denoising when you do your renders?  Based on your images, it doesn't look like you do.  You mentioned that you saw a huge gain between the 3090 and the 4090?  I went from a 3070 to a 4090 and saw a massive uplift, though that's apples to bananas. :) 

    The denoiser does use the tensor cores, which use NVIDIA's AI tech to "fill in the gaps".  Jensen said that they were able to render something like 38 million pixels from just 2 million (rough arithmetic below).

    The gains are also going to come from the faster memory.  I don't think more memory will affect much; however, I did notice some struggling when I was using characters with 8K textures.  With standard Gen 8.1 models on my 4090, I could get a dozen Vickies on screen in a scene without filling up my VRAM.
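
    For a rough sense of scale, here is a quick sanity check of that pixel ratio in Python, taking the figures quoted above at face value (the exact keynote numbers may have been slightly different):

    rendered = 2_000_000      # pixels actually rendered, per the quote above
    displayed = 38_000_000    # pixels displayed after AI reconstruction, per the quote above
    print(f"AI-generated share: {(displayed - rendered) / displayed:.1%}")      # ~94.7%
    print(f"Displayed pixels per rendered pixel: {displayed / rendered:.0f}x")  # 19x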

  • Masterstroke Posts: 2,023

    Havos said:

    I assume it means this: https://www.d5render.com/

    I don't know anything about it, but it is a real time renderer with AI features.

    Hell, yes!
    You are absolutely right.
    "It's not an S"
    It is d5 render, NOT DS render.

  • I think twice the performance refers to DLSS 4 in games.  I very much doubt it's twice in rasterisation or ray tracing, i.e. the fastest frame to render is the one you don't, if you see what I mean.  Anyway, $1,000 for a 5080!  I paid £1,200 for my 4080.  Still 16 GB, which is a pity.  The 5090 is out of my price range.  Whether a 5080 is worth it over the 4080 depends on ray tracing performance, I suppose.

  • Rauko Posts: 37
    edited January 7

    Expozures said:

    @Rauko, do you use any denoising when you do your renders?  Based on your images, it doesn't look like you do.  You mentioned that you saw a huge gain between the 3090 and the 4090?  I went from a 3070 to a 4090 and saw a massive uplift, though that's apples to bananas. :) 

    I very rarely use the denoiser except when doing a mass of renders quickly (ie, like an animation) or in very specific circumstances. I found you don't really need it with the 4090 when you can whack out 1000s of iterations and count them in minutes.

    Yeah, performance increase was about 60-65% .. but I think things have slowed down a bit now due to Daz / Iray updates although that's more just a subjective feeling and not a hard objective fact. If the 5090 shows the same performance increase then it may be worth it .. but there will come a point in future generations where diminishing returns will kick in and paying $2000 will only be getting a few seconds speed increase .. not quite there yet, but in a couple of generations, maybe
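
    To put that diminishing-returns point in concrete terms, here is a tiny illustrative calculation (all numbers are assumptions rather than benchmarks: a 10-minute scene and a constant ~60% uplift per generation):

    minutes = 10.0   # assumed render time on a 3090
    uplift = 1.60    # assumed: each generation is ~60% faster than the last
    for gpu in ["3090", "4090", "5090", "6090"]:
        print(f"{gpu}: {minutes:5.2f} min")
        minutes /= uplift

    The percentage gain is identical at every step, but the absolute saving shrinks from roughly 3.75 minutes (3090 to 4090), to about 2.3, to about 1.5 - which is exactly the "paying $2000 for a small speed increase" ceiling described above.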

     

  • Richard Haseltine Posts: 101,965

    Masterstroke said:

    Havos said:

    I assume it means this: https://www.d5render.com/

    I don't know anything about it, but it is a real time renderer with AI features.

    Hell, yes!
    You are absolutely right.
    "It's not an S"
    It is d5 render, NOT DS render.

    Thanks, it did seem unlikely that it was "our" DS even if I had been reading it correctly.

  • Richard Haseltine Posts: 101,965

    Expozures said:

    Really guessing that we probably won't notice a jaw-dropping performance increase.  Maybe a significant one, but maybe not anything that will make you run out and sell your 4090 for a 5090.

    You've said this in a couple of threads, but I am not clear how you are reaching the conclusion. It seems premature to draw any conclusions on this.

    @Rauko, do you use any denoising when you do your renders?  Based on your images, it doesn't look like you do.  You mentioned that you saw a huge gain between the 3090 and the 4090?  I went from a 3070 to a 4090 and saw a massive uplift, though that's apples to bananas. :) 

    The denoiser does use the tensor cores, which use NVIDIA's AI tech to "fill in the gaps".  Jensen said that they were able to render something like 38 million pixels from just 2 million.

    The gains are also going to come from the faster memory.  I don't think more memory will affect much; however, I did notice some struggling when I was using characters with 8K textures.  With standard Gen 8.1 models on my 4090, I could get a dozen Vickies on screen in a scene without filling up my VRAM.

  • nonesuch00 Posts: 18,240

    It's on my list for June 2026, unless the 6090 will be out in Q4 2026 / Q1 2027. At $2,000, an additional six-month wait for the 6090 makes sense, since I definitely have other things to get done before a new video card, no matter how good the card is.

  • I have a question for all of you who have powerful computers.

    If I want to use dForce Strand-Based for animation, what class of computer should I buy?

    I am currently using a laptop with an RTX 2060, and the simulation is quite heavy for it!

  • kyoto kid Posts: 41,165

    ...hmmm, 32 GB VRAM and over 21,000 cores for 2,000 USD.  That is less than half the price of a 32 GB RTX 5000 Ada, but the downside is just over twice the power draw (575 W vs. 250 W for the RTX 5000) and a minimum system PSU requirement of 1,000 W (the 5000 has a minimum system PSU requirement of 600 W).

    One of the big nice improvements is that it is no longer a "brick": it's a dual-slot rather than a triple-slot card, and is over an inch shorter than the 4090.

    The one question: would it be compatible with a PCIe 4.0 slot, or does it require PCIe 5.0?

    The more VRAM, the less chance of dumping to the CPU, which translates to a "speed advantage".
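
    As a rough illustration of how quickly textures alone push a scene toward that point, here is some back-of-the-envelope math (raw, uncompressed RGBA figures only; Iray's actual on-GPU formats, mipmaps and texture compression settings will change the real numbers):

    def texture_mb(side, channels=4, bytes_per_channel=1):
        # raw footprint of a square texture map, in MB
        return side * side * channels * bytes_per_channel / (1024 ** 2)

    print(f"4K map: {texture_mb(4096):.0f} MB")   # ~64 MB
    print(f"8K map: {texture_mb(8192):.0f} MB")   # ~256 MB
    maps = 6  # assumed per character: base colour, normal, roughness, etc.
    print(f"One character with {maps} 8K maps: ~{maps * texture_mb(8192):.0f} MB")  # ~1.5 GB

    A handful of 8K-textured characters can therefore eat several gigabytes of VRAM before geometry, hair and the frame buffer are even counted, which is where the extra headroom on a 32 GB card would actually show up.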

  • James Posts: 1,080

    Hey, does the AI stuff in the 5000 series have any advantage in Daz rendering and general work?

  • Expozures Posts: 232

    Rauko said:

    Expozures said:

    @Rauko, do you use any denoising when you do your renders?  Based on your images, it doesn't look like you do.  You mentioned that you saw a huge gain between the 3090 and the 4090?  I went from a 3070 to a 4090 and saw a massive uplift, though that's apples to bananas. :) 

    I very rarely use the denoiser except when doing a mass of renders quickly (ie, like an animation) or in very specific circumstances. I found you don't really need it with the 4090 when you can whack out 1000s of iterations and count them in minutes.

    Yeah, performance increase was about 60-65% .. but I think things have slowed down a bit now due to Daz / Iray updates although that's more just a subjective feeling and not a hard objective fact. If the 5090 shows the same performance increase then it may be worth it .. but there will come a point in future generations where diminishing returns will kick in and paying $2000 will only be getting a few seconds speed increase .. not quite there yet, but in a couple of generations, maybe

    Pretty much, yeah.  As an IT tech at a company, I do see that.  Give a user a shiny new system and "OMG it's so fast!"  6 months later it's, "Hey, this is kinda slow."  No...it's the same speed that it was 6 months ago, you just got used to it. LOL

    My personal opinion for anyone with a 4090 is to skip this gen.  I don't think the difference would justify a $2,000 USD investment.  But if there are people out there who have an older card, or want to step up their game and move to an xx90 card, wait a couple of weeks until the 5090s come out.

  • Expozures Posts: 232

    Richard Haseltine said:

    Expozures said:

    Really guessing that we probably won't notice a jaw-dropping performance increase.  Maybe a significant one, but maybe not anything that will make you run out and sell your 4090 for a 5090.

    You've said this in a couple of threads, but I am not clear how you are reaching the conclusion. It seems premature to draw any conclusions on this.

    @Rauko, do you use any denoising when you do your renders?  Based on your images, it doesn't look like you do.  You mentioned that you saw a huge gain between the 3090 and the 4090?  I went from a 3070 to a 4090 and saw a massive uplift, though that's apples to bananas. :) 

    The denoiser does use the tensor cores, which use NVIDIA's AI tech to "fill in the gaps".  Jensen said that they were able to render something like 38 million pixels from just 2 million.

    The gains are also going to come from the faster memory.  I don't think more memory will affect much; however, I did notice some struggling when I was using characters with 8K textures.  With standard Gen 8.1 models on my 4090, I could get a dozen Vickies on screen in a scene without filling up my VRAM.

    Just my experience with working with technology in my field.  Generation over generation often sees only small improvements.  10th gen CPUs weren't much faster than 9th, 11th not much faster than 10th, and so on.  Same in the video card market.  One step up between generations often doesn't bring huge, jaw-dropping gains.  You see more efficiencies and better tuning, but overall there's no huge leaps-and-bounds difference.  My 1660 Ti wasn't much slower than a 2060, so I passed on that.  I got a 3070 and saw a huge boost because it was two generations newer, and RTX technology had matured enough that it was more worthwhile.  I did see a jump from the 3070 to the 4090, simply because I had lots more VRAM to work with.  But if you look at the benchmarks on the site, you'll see even the 4090 isn't *hugely* faster than the 3090.  It is, but not by something that will make everyone run out and buy one.  People who had a 20-series card, though, would have noticed huge gains with the 4090.

    In my industry I work a lot in the CAD/CAM environment, and I'm particularly familiar with the Quadro line of cards.  With the software we use, 90% of my users really don't see a huge difference if I swap a Quadro K2000 for a Quadro P2000.  Same specs, just the P is three generations newer than the K, and it really doesn't affect the performance of even enterprise-level CAD/CAM software.  What they do see, when the models get larger and we go from 4 GB cards to 8 GB cards, is that the models load faster.

    And based on Jensen's keynote speech, most of the architecture development went into Tensor and not CUDA.  CUDA is what handles the ray-tracing aspect.  Granted, it does do the heavy lifting of image display.  Tensor cores are the AI architecture underneath.  Tensor cores are way faster than CUDA cores, but are much more prone to producing errors.  This is what's used when you use the denoiser.  The AI in the tensor cores calculates and predicts the pixels around the generated pixels, which is why images can look flat or smudged if you set the denoiser too low.

  • Expozures Posts: 232

    kyoto kid said:

    ...hmmm, 32 GB VRAM and over 21,000 cores for 2,000 USD.  That is less than half the price of a 32 GB RTX 5000 Ada, but the downside is just over twice the power draw (575 W vs. 250 W for the RTX 5000) and a minimum system PSU requirement of 1,000 W (the 5000 has a minimum system PSU requirement of 600 W).

    One of the big nice improvements is that it is no longer a "brick": it's a dual-slot rather than a triple-slot card, and is over an inch shorter than the 4090.

    The one question: would it be compatible with a PCIe 4.0 slot, or does it require PCIe 5.0?

    The more VRAM, the less chance of dumping to the CPU, which translates to a "speed advantage".

    That's one thing I like about the Quadro series: their much lower power consumption.  I can run a 16 GB Quadro RTX card just on the 75 W available from the PCIe slot.  Granted, when it comes to gaming it's no fun.  My laptop has a Quadro RTX 3000 video card in it, and I tried Daz on it; my 1660 Ti is better.

  • Richard Haseltine Posts: 101,965

    Expozures said:

    Richard Haseltine said:

    Expozures said:

    Really guessing that we probably won't notice a jaw-dropping performance increase.  Maybe a significant one, but maybe not anything that will make you run out and sell your 4090 for a 5090.

    You've said this in a couple of threads, but I am not clear how you are reaching the conclusion. It seems premature to draw any conclusions on this.

    @Rauko, do you use any denoising when you do your renders?  Based on your images, it doesn't look like you do.  You mentioned that you saw a huge gain between the 3090 and the 4090?  I went from a 3070 to a 4090 and saw a massive uplift, though that's apples to bananas. :) 

    The denoiser does use the tensor cores, which use NVIDIA's AI tech to "fill in the gaps".  Jensen said that they were able to render something like 38 million pixels from just 2 million.

    The gains are also going to come from the faster memory.  I don't think more memory will affect much; however, I did notice some struggling when I was using characters with 8K textures.  With standard Gen 8.1 models on my 4090, I could get a dozen Vickies on screen in a scene without filling up my VRAM.

    Just my experience with working with technology in my field.  Generation over generation often sees only small improvements.  10th gen CPUs weren't much faster than 9th, 11th not much faster than 10th, and so on.  Same in the video card market.  One step up between generations often doesn't bring huge, jaw-dropping gains.  You see more efficiencies and better tuning, but overall there's no huge leaps-and-bounds difference.  My 1660 Ti wasn't much slower than a 2060, so I passed on that.  I got a 3070 and saw a huge boost because it was two generations newer, and RTX technology had matured enough that it was more worthwhile.  I did see a jump from the 3070 to the 4090, simply because I had lots more VRAM to work with.  But if you look at the benchmarks on the site, you'll see even the 4090 isn't *hugely* faster than the 3090.  It is, but not by something that will make everyone run out and buy one.  People who had a 20-series card, though, would have noticed huge gains with the 4090.

    In my industry I work a lot in the CAD/CAM environment, and I'm particularly familiar with the Quadro line of cards.  With the software we use, 90% of my users really don't see a huge difference if I swap a Quadro K2000 for a Quadro P2000.  Same specs, just the P is three generations newer than the K, and it really doesn't affect the performance of even enterprise-level CAD/CAM software.  What they do see, when the models get larger and we go from 4 GB cards to 8 GB cards, is that the models load faster.

    And based on Jensen's keynote speech, most of the architecture development went into Tensor and not CUDA.  CUDA is what handles the ray-tracing aspect.  Granted, it does do the heavy lifting of image display.  Tensor cores are the AI architecture underneath.  Tensor cores are way faster than CUDA cores, but are much more prone to producing errors.  This is what's used when you use the denoiser.  The AI in the tensor cores calculates and predicts the pixels around the generated pixels, which is why images can look flat or smudged if you set the denoiser too low.

    The RT cores, surely, are doing a lot of the ray-trace work - Tensor cores are used by the denoiser, and by the newer test for completion too, as I recall.

  • robertswww Posts: 793

    RTX 5090 - From the specs, it looks like a Beast!
    Starting at $1,999

    Key Features/Specs:
    Blackwell Architecture
    32 GB GDDR7 GPU Memory
    512-bit Memory Interface
    1,792 GB/sec Memory Bandwidth
    318 TFLOPS 4th Gen Ray Tracing Cores
    21,760 CUDA Cores
    1,000 Watt System Power
    575 Watts Total Graphics Power
    1x 600 W PCIe Gen 5 cable
    Recommended PCIe CEM 5.1-compliant PSU
    PCIe Gen 5
    2-Slot

    Sources:
    https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/
    https://www.nvidia.com/en-us/geforce/graphics-cards/compare/

     

    But DLSS is just the beginning.

    We've integrated neural networks inside of programmable shaders to create neural shaders. RTX Neural Shaders will drive the next decade of graphics innovations. They can be used to compress textures by up to 7X, saving massive amounts of graphics memory. And be used to create cinematic-quality textures, and even more advanced lighting effects in games.

    The NVIDIA RTX Blackwell architecture has been built and optimized for neural rendering. It has a massive amount of processing power, with new engines and features specifically designed to accelerate the next generation of neural rendering.

    With up to 92 billion transistors, Blackwell is the most powerful consumer GPU ever created. The Blackwell streaming multiprocessor (SM) has been updated with more processing throughput, and a tighter integration with the Tensor Cores in order to optimize the performance of neural shaders. Blackwell is enhanced by several hardware and software innovations to improve Shader Execution Reordering. The reorder logic is twice as efficient, increasing the speed and precision of reordering which accelerates the performance of neural shaders. 

    Blackwell has also been enhanced with PCIe Gen5 and DisplayPort 2.1b UHBR20, driving displays up to 8K 165Hz.

    And in order to feed all this processing power, Blackwell is equipped with the world's fastest memory - GDDR7 with speeds up to 30Gbps. With G7 memory, Blackwell GPUs can deliver up to 1.8TB/s of memory bandwidth.
    Source:
    https://www.nvidia.com/en-us/geforce/news/rtx-50-series-graphics-cards-gpu-laptop-announcements/
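
    For anyone wondering how the bandwidth figure in the spec list above is arrived at, it follows from the standard GDDR formula (per-pin data rate x bus width / 8); the 28 Gbps effective rate below is inferred from the listed numbers rather than stated outright:

    def bandwidth_gb_s(gbps_per_pin, bus_width_bits):
        # memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
        return gbps_per_pin * bus_width_bits / 8

    print(bandwidth_gb_s(28, 512))   # 1792.0 GB/s -> the listed "1,792 GB/sec" / "up to 1.8 TB/s"
    print(bandwidth_gb_s(30, 512))   # 1920.0 GB/s -> what the quoted 30 Gbps GDDR7 ceiling would allow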

  • kyoto kid Posts: 41,165

    Expozures said:

    kyoto kid said:

    ...hmmm, 32 GB VRAM and over 21,000 cores for 2,000 USD.  That is less than half the price of a 32 GB RTX 5000 Ada, but the downside is just over twice the power draw (575 W vs. 250 W for the RTX 5000) and a minimum system PSU requirement of 1,000 W (the 5000 has a minimum system PSU requirement of 600 W).

    One of the big nice improvements is that it is no longer a "brick": it's a dual-slot rather than a triple-slot card, and is over an inch shorter than the 4090.

    The one question: would it be compatible with a PCIe 4.0 slot, or does it require PCIe 5.0?

    The more VRAM, the less chance of dumping to the CPU, which translates to a "speed advantage".

    That's one thing I like about the Quadro series: their much lower power consumption.  I can run a 16 GB Quadro RTX card just on the 75 W available from the PCIe slot.  Granted, when it comes to gaming it's no fun.  My laptop has a Quadro RTX 3000 video card in it, and I tried Daz on it; my 1660 Ti is better.

    ...were it not for the higher cost, I would prefer a Quadro-grade card, as I am not into gaming. The two things that attract me are the lower power consumption and the higher VRAM offerings.

    Sadly, a 32 GB RTX 5000 Ada is about twice the price of the 5090 (about 4,200 USD at Newegg), even though it has the same maximum TDP as my old Maxwell Titan X (meaning I don't have to get a new PSU) along with the same physical dimensions (the 5090 manages that only in the Founders Edition; all the others are triple- and even quadruple-slot).

    As a compromise I have been considering either the RTX 4000 Ada or the RTX A4500, both of which have 20 GB GDDR6.  The A4500 has a slightly higher core count, but the difference between the two (particularly given the efficiency advancements of the Ada generation) is relatively minimal, so it is more of a price issue.  Both also have a lower TDP than my trusty old Titan X (the A4500 has a TDP of 200 W and the 4000 Ada 130 W).

    I also prefer the "blower" design, as it exhausts out the back of the case instead of into it.

  • JP Posts: 76

    kyoto kid said:

    ...hmmm, 32 GB VRAM and over 21,000 cores for 2,000 USD.  That is less than half the price of a 32 GB RTX 5000 Ada, but the downside is just over twice the power draw (575 W vs. 250 W for the RTX 5000) and a minimum system PSU requirement of 1,000 W (the 5000 has a minimum system PSU requirement of 600 W).

    One of the big nice improvements is that it is no longer a "brick": it's a dual-slot rather than a triple-slot card, and is over an inch shorter than the 4090.

    The one question: would it be compatible with a PCIe 4.0 slot, or does it require PCIe 5.0?

    The more VRAM, the less chance of dumping to the CPU, which translates to a "speed advantage".

    The power consumption of the GPU can be reduced via Afterburner.  I used to limit my 3090 to 73% power draw, overclock the memory by 1050 MHz, underclock the core by 400 MHz, and set the fan speed to 80% when I mined Ethereum for years. I am still using the 3090 without issues. It's the discontinued EVGA brand and still rock solid. Amazing build quality. I need to check if my Ethereum settings will improve Iray rendering times. The GPU temps don't go as high when rendering a scene in Daz, so I don't think it draws as much power as when I mined Ethereum.

    Power usage can be confirmed via command line:

    nvidia-smi.exe -i 0 --loop-ms=1000 --format=csv,noheader --query-gpu=power.draw

    Upscaling videos with Topaz will heat the GPU up more than Iray, in my experience.
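
    If you want to log that over a whole render rather than watching the loop output, a minimal Python wrapper around the same nvidia-smi query might look like this (a sketch only; it assumes nvidia-smi is on the PATH and that the card is GPU index 0):

    import subprocess, time

    samples = []
    for _ in range(60):  # one sample per second for a minute
        out = subprocess.run(
            ["nvidia-smi", "-i", "0", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        samples.append(float(out))
        time.sleep(1)

    print(f"average: {sum(samples) / len(samples):.1f} W, peak: {max(samples):.1f} W")

    For what it's worth, nvidia-smi also has a -pl (power limit) option that can cap the draw without Afterburner, though it needs to be run with administrator rights.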
