Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • Ghosty12 Posts: 2,068

    This video is interesting to watch; it is pretty much all speculation, but it may come down to what AMD releases to decide whether Nvidia releases better cards sooner rather than later, or something like that...

  • kyoto kid Posts: 41,257
    edited September 2020

    ...my question to the above is: how do they do the current 16 GB RTX Quadro 5000 and 24 GB RTX Titan, which are both dual-slot cards, without 2 GB memory chips?

    Post edited by kyoto kid on
  • kyoto kid said:

    ...my question to the above is: how do they do the current 16 GB RTX Quadro 5000 and 24 GB RTX Titan, which are both dual-slot cards, without 2 GB memory chips?

    There are 2 GB and 4 GB VRAM chips. How do you think they fit 48 GB on the RTX Quadro 8000, which is still a two-slot card?

    I didn't watch the video above (he was annoying), but did he imply there weren't?

  • outrider42 Posts: 3,679

    The 3090 will not have the validation that the Quadros will have. It might work fine in software like SolidWorks (it almost certainly will), but unless the card is validated, there are companies that will not use it. Some companies won't even give you customer support unless the hardware you're using is validated.

    That's why Nvidia could sell both the RTX Titan and the RTX Quadro 6000 last gen: same amount of VRAM and CUDA cores, but almost double the price for the Quadro.

    People get fixated on the top-of-the-stack Quadros, but there are generally 4 or more Quadros in the line, and they match up to the consumer stack; they use functionally the same chips. They are generally downclocked slightly, use standard GDDR6 instead of GDDR6X VRAM (GDDR6X is made only by Micron and is not part of the GDDR standard, which makes validating it a no-go), and come with more VRAM (usually) for a lot more money.

    I'll just point out that Nvidia's Quadro growth has slowed tremendously over the past couple of years. Their most recent financial report says this about the professional visualization segment, which covers Quadro:

    • Second-quarter revenue was $203 million, down 34 percent from the previous quarter and down 30 percent from a year earlier.

    Meanwhile, gaming did $1.65 billion and data center did $1.75 billion in the same time frame, and both were way up. And this is with Turing on its last legs. Gaming is still very much their main business; data center only recently surpassed it, thanks in part to acquisitions like Mellanox. Now, when you consider that gaming cards cost just a fraction of what Quadros do, yet gaming did 8 times the revenue, that should give an idea of how many units are getting sold.
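    To put rough numbers on that, here is a quick sketch (Python). The revenue figures are from the report quoted above; the average selling prices are made-up round numbers, purely to illustrate the unit math:

        # Revenue from the quarterly report quoted above, in dollars;
        # the average selling prices (ASPs) are hypothetical round numbers.
        gaming_revenue = 1650e6
        quadro_revenue = 203e6
        gaming_asp = 500     # assumed average price of a gaming card, $
        quadro_asp = 2500    # assumed average price of a Quadro, $

        gaming_units = gaming_revenue / gaming_asp   # ~3.3 million cards
        quadro_units = quadro_revenue / quadro_asp   # ~81 thousand cards
        print(f"implied unit ratio: ~{gaming_units / quadro_units:.0f}x")  # ~41x

    Swap in whatever ASPs you think are realistic; the point is that the unit gap is far larger than the 8x revenue gap.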

    Their growth has come from gaming and data center. Back in 2018 Quadro growth was already flat, so this has been a trend. Perhaps this could be another reason why the gaming lineup of Ampere is launching before Quadro. In contrast, the Turing Quadros launched months before the GeForce Turing cards did.

     

    kyoto kid said:

    ...my question to the above is: how do they do the current 16 GB RTX Quadro 5000 and 24 GB RTX Titan, which are both dual-slot cards, without 2 GB memory chips?

    The 3080 and 3090 are using GDDR6X, which is a new memory type. It is so new that 2GB chips do not exist yet. But last gen used regular GDDR6, and those chips were available in different densities for the Quadros and the Titan.
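    If you want to sanity-check the chip math, it falls out of the memory bus width. A quick sketch (Python; the chip configurations are my reading of the public spec sheets, so treat them as assumptions):

        # VRAM = chips x density. A 384-bit bus is twelve 32-bit channels;
        # "clamshell" mode doubles the chip count by mounting chips on both
        # sides of the board (which is why cooling both sides matters).
        def vram_config(bus_width_bits, chip_density_gb, clamshell=False):
            channels = bus_width_bits // 32
            chips = channels * (2 if clamshell else 1)
            return chips, chips * chip_density_gb

        # Quadro RTX 8000: 2 GB (16 Gb) GDDR6 chips, clamshell
        print(vram_config(384, 2, clamshell=True))  # (24, 48): 24 chips, 48 GB
        # RTX 3090: only 1 GB GDDR6X chips exist, so clamshell again
        print(vram_config(384, 1, clamshell=True))  # (24, 24): 24 chips, 24 GB
        # RTX 3080: 320-bit bus, 1 GB chips, single-sided
        print(vram_config(320, 1))                  # (10, 10): 10 chips, 10 GB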

  • kyoto kid Posts: 41,257

    ...the key word is "yet".

    So that would mean, were they to release them with double the memory using 1 GB chips, those "Ti"/"Super" cards, along with the Titan and Quadros, would all need to be triple-slot in width as well, to accommodate the memory, and its cooling, on both sides of the board.

  • outrider42 said:

    I'll just point out that Nvidia's Quadro growth has slowed tremendously [...] Their growth has come from gaming and data center. [full post quoted above]

    I'm not sure how they segment the sales of Quadros, but I am fairly confident that the data center segment is a lot more than Mellanox. We have a lot of InfiniBand and Ethernet adapters from them, but IIRC a 1U InfiniBand switch only runs a couple of thousand dollars.

    Nvidia may not be selling a lot of Quadros as workstation cards, but they are selling lots into the data center.

  • nonesuch00 Posts: 18,320

    I have a 2019 gamer-model HP Pavilion that is absolutely great quality. Same with the HP 8460p EliteBook that I fried doing DAZ renders and the HP 8470p EliteBook that I didn't fry doing DAZ renders. I'm going to convert the HP 8470p to a FreeBSD laptop. I guess I'll toss the HP 8460p, because replacing the motherboard costs more than it's worth; otherwise it'd make a great Ubuntu laptop. So I can recommend HP/Compaq hardware, but it often costs noticeably more.

  • nicstt Posts: 11,715

    @billyben_0077a25354

    fair enough.

    kyoto kid said:
    nicstt said:
    Visuimag said:

     

    Ghosty12 said:

    Just watched this Moore's Law is Dead video on what is possibly coming, and it is interesting. It mentioned that Nvidia is being a bit naughty, and that AIB partners are likely having to charge more for their versions of Ampere. The main part of the video is about the new Quadros and a possible Titan card next year.

    -vid

    And this is why I didn't rule out a new TITAN. The full-fat die wasn't unveiled yet. Not saying this is confirmation of anything, but I fully expect a TITAN based on Ampere next year.

     

     

    The 3090 seems to be a Titan, which, given the name, means there could be a baby Titan (the 3090) and a Titan Titan, with the Titan Titan priced accordingly and, considering what was said at the presentation, named something else.

    ...a new Titan will need to offer something more than the 3090, like at least 32 GB or 48 GB of VRAM, or possibly HBM memory, as well as possibly a higher core count, to make people want to shell out close to $2,500-$3,000 or whatever it ends up priced at.

    This is the issue. If Nvidia is doing a 48GB Quadro, and the 3090 has 24GB, what purpose does a Titan serve at either of those VRAM counts? If it has 24GB but costs $1,000+ more than a 3090, then it has nothing to offer other than TCC mode and a couple of other features. As it is, we do not yet know whether the 3090 has TCC or not, though I will say the name implies it does not. I know Jensen said this was a Titan type of card... OK, so why not call it a Titan then? It doesn't make any sense.

    And if they do a Titan with 48GB, then it competes with the rumored Quadro (and any Quadro, really, as 48GB is already top-tier Quadro territory). A Titan would offer several key Quadro features, though it is important to note that it wouldn't offer all of them. Even so, it would be so close to the Quadro that it would not make sense for most use cases.

    The other issue is core counts. The rumored Quadro only has a couple hundred more cores than the 3090, and the same would apply to a Titan, because the 3090 is already so close to the full die. That is a tiny ~2% difference in core count. We also need to remember that these Ampere CUDA cores are not quite the same as past CUDA cores; they are around 33% weaker per core (possibly more) when you factor in the performance Nvidia has shown us so far. So the Titan or Quadro would only be around 1% faster in actual performance, and depending on clock speeds it might actually be slower. Quadros often run more conservative clocks, so that could well be the case.
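    Rough math on that, using the 3090's announced 10,496 cores and assuming the rumored part is the full GA102 at 10,752 cores (the clock deficit below is hypothetical):

        # Within one architecture, throughput scales roughly with cores x clock.
        cores_3090 = 10496        # announced
        cores_rumored = 10752     # assumed full GA102 for the Quadro/Titan
        clock_ratio = 0.98        # hypothetical: pro card clocked ~2% lower

        core_ratio = cores_rumored / cores_3090
        print(f"core advantage: {core_ratio - 1:.1%}")                # ~2.4%
        print(f"net advantage:  {core_ratio * clock_ratio - 1:.1%}")  # ~0.4%, and negative with lower clocks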

    So then, where does a possible Titan even fit into the Ampere lineup?

    The only thing that would make sense to me would be splitting the difference and doing a 32GB card, or something in that range. I am not sure how doable that would be as it may need a different memory configuration.

    So this is why I wonder if a Titan even gets released. If we do get a Titan, it may be quite far off, like a year. This would give Nvidia time to sell those expensive Quadros, and about one year would be a logical point to put out a card that beats the Quadro.

    In other words, don't expect a Titan too soon. Maybe we get lucky and get one in the spring; that would match the time frame of the 1080 Ti's release after the 1080. But the difference there was that the Pascal Quadro also released near the 1080, and it had 24GB compared to the Titan XP's 12GB and the 1080 Ti's 11GB. So the Quadro released much earlier and kept its place in the lineup. With Ampere, the Titan doesn't seem to have a good place yet.

    Yeh, many folks seem to be ignoring this or not noticing, or not caring because... No clue.

    I love the graphs showing TFLOPS. They're about as trustworthy as a double-glazing salesman (in the UK, at least, their reputation has not been the best over the past few decades).

    Well, the 3090 is the new Titan according to Nvidia, but, well... marketing.

  • stephenschoon Posts: 360
    edited September 2020

    Hi Guys,
    Currently running a pair of watercooled GTX 1080 Ti GPUs with a Core i7-7820X and 32GB RAM. Originally looked at an RTX 3090, but it's a lot of money, and do I really need 24GB of VRAM...
    So now I'm thinking about an RTX 3080. I know it's not a big jump in CUDA cores, but the RTX card efficiencies should still give my rendering a boost, and...
    ...maybe spending the money I've saved on a new AMD motherboard and a Ryzen 9 3900X. I could even reuse a GTX 1080 Ti to drive the display...

    What does the forum think?
    Steve

     

    Post edited by stephenschoon on
  • nicstt Posts: 11,715

    1080ti for your displays with a dedicated render card is, imo, the way to go.

  • stephenschoon Posts: 360
    edited September 2020
    nicstt said:

    1080ti for your displays with a dedicated render card is, imo, the way to go.

    And because I'm not a gamer, it wouldn't matter if the 1080 Ti was in an x8 slot...

    Although, looking at the AMD motherboards, it would appear they can only drive one x16 slot at x16; the other x16 slot runs at x4, unless you go to a Threadripper TR4 motherboard, but then the cheapest Threadripper CPU is £1,200...

     

    Post edited by stephenschoon on
  • *bangs head against wall*

    dedicated display cards are POINTLESS!

    Windows always reserves VRAM on every consumer-grade video card as if it had a monitor connected. It may even actually have the card try to output a signal; that is less clear. But there is clearly nothing to be gained by not having every Nvidia card installed selected for use in rendering.

    This myth that there is some benefit to having dedicated cards needs to die. People are wasting money and resources on it.

  • kenshaw said:

    *bangs head against wall* dedicated display cards are POINTLESS! [full post quoted above]

    What if you have one card that is doing DAZ rendering and the other one you have excluded from rendering in DAZ... wouldn't that be ideal for actually being able to use the machine while it's rendering? 

    I'm not trying to agitate here, it's just what I've always been led to believe. Honestly, I have two cards, but the GTX 1660 is disabled in the BIOS because I couldn't get the Quadro to run correctly (driver conflicts) when I was using it, so I just decided to use the one for everything; but in the situation above I'd think you'd see a benefit from running a dedicated "display card".

  • Gordig Posts: 10,191

    kenshaw said:

    *bangs head against wall* dedicated display cards are POINTLESS! [full post quoted above]

    Also not trying to stir the pot, but would you consider Titans "consumer grade"? If the 3090 ends up having TCC mode capability, would that not invalidate your statement?

  • nicstt Posts: 11,715
    nicstt said:

    1080ti for your displays with a dedicated render card is, imo, the way to go.

    And because I'm not a gamer, it wouldn't matter if the 1080 Ti was in an x8 slot...

    Although, looking at the AMD motherboards, it would appear they can only drive one x16 slot at x16; the other x16 slot runs at x4, unless you go to a Threadripper TR4 motherboard, but then the cheapest Threadripper CPU is £1,200...

     

    I use a Threadripper, but I'd put the render card in an x8 slot if I didn't have x16 available; I believe it doesn't make a difference to the actual render times, only to the initial data transfer; I could, however, be remembering wrong.
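    That squares with the bandwidth math: PCIe is mostly exercised when the scene is first copied to the card. A rough sketch (Python), assuming PCIe 3.0 at roughly 0.985 GB/s per lane and a hypothetical 8 GB scene:

        # One-way PCIe 3.0 bandwidth is ~0.985 GB/s per lane after
        # 128b/130b encoding overhead; upload time scales inversely.
        lane_gb_s = 0.985
        scene_gb = 8.0  # hypothetical scene size

        for lanes in (16, 8, 4):
            bandwidth = lanes * lane_gb_s
            print(f"x{lanes:<2}: {bandwidth:5.2f} GB/s -> {scene_gb / bandwidth:4.2f} s upload")
        # x16: ~0.5 s, x8: ~1.0 s, x4: ~2.0 s -- seconds of difference
        # on a render that then runs for minutes.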

  • nicstt Posts: 11,715
    edited September 2020

    kenshaw said:

    *bangs head against wall* dedicated display cards are POINTLESS! [full post quoted above]

    I disagree. Whilst some VRAM is reserved, the card isn't used for anything but rendering. Putting it in capitals doesn't make your emphasis any more correct, or even correct at all.

    It remains at idle, because no monitors are plugged into it.

    ... And stop banging your head against the wall, it must be giving you a concussion.

    Post edited by nicstt on
    Kenshaw's point about memory is well supported - we went through this over the amount of memory Windows 10 reserves, though there are other factors there. What isn't clear is the use of resources for any kind of acceleration (OpenGL, OpenCL, etc.) - we know that Iray doesn't automatically use all cards, and it may well be that if one or more applications are actually doing more than sending raw data to the viewports, then that load is going to be specific to the card(s) used for their display (or whatever other card is set in their preferences, if appropriate).

  • kenshaw said:

    *bangs head against wall* dedicated display cards are POINTLESS! [full post quoted above]

    Kenshaw's point about memory is well supported [...] (full reply quoted above)

    Man I miss Windows 7

  • nonesuch00 Posts: 18,320

    I am looking for a single-slot Radeon RX 550 or better to put in my PCIe x16 v2.0 (I think) slot for when I get an Nvidia 30XX, but modern (RX 450 or greater) single-slot cards from AMD with 4GB of RAM or more are hard to find, and given the state of the newest GPUs they are extremely overpriced. In fact I've only found a couple, and none with more than 4GB of RAM.

  • duckbomb said:

    kenshaw said:

    *bangs head against wall* dedicated display cards are POINTLESS! [full post quoted above]

    What if you have one card that is doing DAZ rendering and the other one you have excluded from rendering in DAZ... wouldn't that be ideal for actually being able to use the machine while it's rendering?

    [rest of duckbomb's post quoted above]

    If the card is disabled, why is it plugged in? And again, no. There is no benefit when all the cards are consumer grade. NONE.

    If you have a pro card or a prosumer card that can be put in TCC mode, and the know-how to do so, then there is a benefit.

    I have 2 cards. I use both for rendering. I use my computer while rendering without issue. I think the people who have issues with using their systems while rendering are either using the CPU, which always pegs the CPU, or have very low-spec GPUs (which likely means they fail over to CPU). That's the reason WDDM reserves VRAM in the first place.

  • bluejaunte Posts: 1,923

    While you render on the GPU in Iray, other GPU-accelerated applications might be slower. Browsers use some GPU acceleration these days, though I never noticed any slowdown, nor would I care much. Obviously games would be slower while you render, but who does that? I use Mari, which lags noticeably during rendering, as it's a very GPU-heavy application; I tend not to do both at the same time, and it would definitely not be worth it to have a separate video card for it. Waste of processing power, when I could just render faster on both GPUs.

    Overall it's absolutely not worth it to have a separate video card for driving the displays, even less so when you want to use a 1080 Ti for it. I mean, how decadent is it to use a powerful card like that, with 11GB of VRAM, to just drive a display and nothing more?

  • PerttiA Posts: 10,024


    Man I miss Windows 7

    Fortunately nobody at home is forcing one to change. ;)
    At work I managed to fight back for two years, but at least I got a third monitor to lessen the pain of W10 + Office 365...

    At least somewhat related to the discussion about multiple GPUs: the further you stay away from integrated graphics, the better your system works.

  • namffuak Posts: 4,191

    *bangs head against wall*

    dedicated display cards are POINTLESS!

    Well, yes - and no. I have a GTX 980 Ti, a 1080 Ti, and a GT 740. The 740 drives my monitors (two at 1920x1080) and is pathetic for Iray. It was also the only video card in the box when I started, before Iray. The first couple of iterations of Iray pretty well ate the GPU, to the point that Win 7 solitaire was unplayable due to screen lag. That was resolved quite a while back, and I plan to add a 3080 to the mix; if Win 7 can find the monitors on the 740 when it relocates to slot 7, I'll keep it; otherwise the 980 Ti will inherit the monitors - but not as a dedicated card.

  • So, the 3080 apparently performs about as fast as the discontinued Radeon VII in mining...

    https://wccftech.com/nvidia-geforce-rtx-3080-ethereum-crypto-mining-performance-leaks-out/

  • nicstt Posts: 11,715

    Kenshaw's point about memory is well supported [...] (full reply quoted above)

    I didn't dispute the RAM; indeed, I acknowledged it.

    I have no control over what the system does with the cards, other than not plugging in monitors; the not plugging in of monitors, however, can certainly be significant.

    When rendering, I only use the dedicated card (and the CPU, which in Blender contributes more). Continued checking of HWiNFO64 and GPU-Z (a scripted version of the same check is sketched after this post) shows the render card at 0% usage outside of renders. That can't be considered conclusive, as I have no idea how accurate those tools are at reporting every possible use, and I don't monitor 24/7, but outside usage is certainly much lower than if I also used the card for monitors. I see the monitor card being used for what appears to be a variety of functions, but not rendering.

    That is what a dedicated card is; why some seem to feel it isn't valid, I have no idea. Rendering doesn't affect viewport display by any discernible amount. They are as separate as I can make them.

    ... ergo: it's a dedicated card.
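    For anyone who wants the same check without HWiNFO64 or GPU-Z, a minimal sketch using Nvidia's NVML via the pynvml package (assuming it is installed; device order depends on your system):

        import time
        import pynvml  # pip install pynvml

        pynvml.nvmlInit()
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        try:
            while True:  # print each GPU's core/memory load once a second
                for i, h in enumerate(handles):
                    util = pynvml.nvmlDeviceGetUtilizationRates(h)
                    print(f"GPU {i}: core {util.gpu:3d}%  mem {util.memory:3d}%")
                time.sleep(1)
        finally:
            pynvml.nvmlShutdown()

    A render card that is truly dedicated should sit at 0% here whenever you are not rendering.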

  • kyoto kid Posts: 41,257
    edited September 2020
    nicstt said:

    1080ti for your displays with a dedicated render card is, imo, the way to go.

    ...that's the setup I have on the render system.


    Man I miss Windows 7

    ...that's why I stayed with it and beefed up system security. Windows 7's "footprint" on VRAM is almost negligible in comparison to 10's, and that is important when you create epic-level scenes like I tend to do.

    Post edited by kyoto kid on
  • kyoto kid Posts: 41,257

    ...ugh, the forum servers are slow today. I actually got a popup while posting an earlier comment that asked if I wanted to "remain on the site" or "leave", similar to FB.

  • TheKD Posts: 2,703
    PerttiA said:

    Fortunately nobody at home is forcing one to change. ;)
    At work I managed to fight back for two years, but at least I got a third monitor to lessen the pain of W10 + Office 365...

    At least somewhat related to the discussion about multiple GPUs: the further you stay away from integrated graphics, the better your system works.

    Hardware vendors are. Newer motherboards and such don't get Win7 drivers anymore. I had to switch to 10; had I known that, I would have bought an older motherboard, just a higher tier or something. Them's the breaks, I guess. I ripped out as much of the crap I don't need as I could: no telemetry, no auto updates, none of that stupid AI lady or whatever she is, lol. Not nearly as slim as I got Win7, but it has to do. Now that I am transferring over to a Blender pipeline, I'm spending less and less time on the Windows partition and more on the Linux Xfce one. It is slim as hell, even compared to Win7. Still need Windows for a lot of my graphics programs, unfortunately.

  • TheMysteryIsThePoint Posts: 3,027
    edited September 2020
    nicstt said:

    The title of this thread is about Nvidia and replacement cards.

    Why are you specifically targeting me when I'm talking about matters related to this thread title?

    I'm done talking to you.

    Because you are credibly contradicting him. He doesn't like it when people do not just accept whatever he writes because of all the caps and exclamation points.

    Post edited by TheMysteryIsThePoint on
  • outrider42 Posts: 3,679
    edited September 2020

    kenshaw said:

    *bangs head against wall* dedicated display cards are POINTLESS! [full post quoted above]

    Um, guys... do you know that the Windows 10 2004 update mostly fixes this issue?

    Now Windows 10 only reserves a flat 900 MB (that is megabytes) of VRAM, REGARDLESS of VRAM capacity. It no longer takes a percentage like it used to.

    On my 1080 Tis, they used to report having 9.1 GB available. But after updating to 2004, well, here it is straight from my Daz log file:

    2020-09-15 22:28:59.187 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): compute capability 6.1, 11.000 GiB total, 10.041 GiB available
    2020-09-15 22:28:59.189 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): compute capability 6.1, 11.000 GiB total, 10.039 GiB available, display attached

    Boomshockalocka!

    As you can see, they now report over 10GB available. So you guys do not need to freak out over whether the 3090 has TCC mode or not; the 3090 should report around 23GB of available VRAM. I think this is fair. It is certainly better than it was before, and you don't have to shell out extra cash for a Titan or Quadro just to get TCC mode and a single extra GB of VRAM.
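    If you want to check the reservation on your own machine without firing off a render, a minimal sketch using NVML through the pynvml package (assuming it is installed) reports the same totals the Iray log does:

        import pynvml  # pip install pynvml

        pynvml.nvmlInit()
        for i in range(pynvml.nvmlDeviceGetCount()):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem = pynvml.nvmlDeviceGetMemoryInfo(h)
            # Values are bytes; WDDM's reservation shows up as baseline "used"
            # even on a card with no monitor attached.
            print(f"GPU {i}: total {mem.total / 2**30:.2f} GiB, "
                  f"free {mem.free / 2**30:.2f} GiB, used {mem.used / 2**30:.2f} GiB")
        pynvml.nvmlShutdown()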

    This is something that perhaps @RayDAnt could test with his Titan RTX to see how much VRAM it reports with 2004.

    And this also means that you are not quite correct, kenshaw. While Windows does reserve VRAM, the 900MB it reserves is probably less than what a display GPU actually spends pushing the Daz app and Windows. Plus, this neglects the case where somebody uses Iray preview mode in the viewport: with a dedicated display GPU, they can run the Iray preview on the display GPU, so the rendering GPU is not burdened by it.

    One last note before some of you rush out to install 2004, if you haven't already: the update is a big one and will take some time to install. It took much longer than my previous updates; at one point I started to wonder if my PC was not going to reboot, as there was a stretch of around 10 minutes where the screen was totally black with no indication of anything going on. So give yourselves plenty of time for this update. But yes, it is excellent news for anybody who uses rendering software.

    Post edited by outrider42 on