NVIDIA RTX 2080 Ti

Comments

  • shaneseymourstudio Posts: 383
    edited November 2018
    hollaback said:
    Update: I chatted with Nvidia today and they confirmed that NVLink with two 2080 ti's will not pool memory between the cards. Nvidia also indicated that a single Quadro RTX 5000 would be more powerful for rendering and animation vs two 2080 ti's.

    That is very interesting... I find it hard to believe, if only because the RTX 5000 has 3072 CUDA cores compared to two 2080 Ti's having a combined 8704 CUDA cores... unless the Tensor cores do more than the CUDA cores. Also, when the rep referenced rendering, I wonder what render engine they were thinking of; I have no clue which render engines take advantage of the Tensor cores. I would be very interested to see a comparison between these cards, as I am back and forth about whether to go Quadro or not. The obvious benefit with Quadro is the full amount of VRAM it exposes vs. the 2080 Ti, due to the drivers. The only open question is NVLink on the Quadro series. I am looking for the page I saw, but it showed that you have to have a 6000 or 8000 series before NVLink would combine the VRAM. I will post that link once I find it.

     

    ***UPDATE

    Nope, I was wrong. The NVLink bridge link near the bottom of the Quadro RTX 5000 page says that it indeed combines the memory of those cards. The only difference is the transfer rate: 50GB/s vs. the Quadro RTX 6000/8000's 100GB/s:

    https://www.nvidia.com/en-us/design-visualization/quadro/rtx-5000/

    I think that just made my decision. I will probably end up going with the Quadro RTX 5000 vs. 2080 Ti's. It looks like they release in the middle of this month, so hopefully we will have some more info by next month on performance.

    Post edited by shaneseymourstudio on
  • rames44 Posts: 332

    Stonemason’s stuff looks awesome, but can be huge. And many modern characters have many, many textures, all at 4K by 4K.  That adds up quickly.

    You might want to look at the Scene Optimizer product - it'll show you how many textures are in your scene, and can scale them down so they take less memory. (Halving the dimensions of a texture reduces the number of pixels by a factor of 4, and you need to be REALLY close to most things before you come anywhere near needing 4K maps...)
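
    To put rough numbers on that texture math, here is a minimal Python sketch (hypothetical figures and helper names, not Scene Optimizer's actual code) of how much uncompressed memory a set of 4K maps can take, and what halving the dimensions saves:

    ```python
    # Back-of-envelope estimate of uncompressed texture memory.
    # Assumes 4 bytes per pixel (8-bit RGBA); real renderers vary.
    BYTES_PER_PIXEL = 4

    def texture_mb(width, height, bytes_per_pixel=BYTES_PER_PIXEL):
        """Uncompressed size of one texture in megabytes."""
        return width * height * bytes_per_pixel / (1024 ** 2)

    # A hypothetical character with 10 maps at 4096x4096:
    full_res = 10 * texture_mb(4096, 4096)  # 640 MB
    half_res = 10 * texture_mb(2048, 2048)  # 160 MB, a 4x saving

    print(f"4K maps: {full_res:.0f} MB, 2K maps: {half_res:.0f} MB")
    ```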

  • nicstt Posts: 11,715

    The Quadro RTX 5000 is actually beginning to look like a semi-decent buy, and a better buy than the 2080 Ti.

  • nicstt Posts: 11,715

    What I wonder is, will it be supported in Studio? A bit expensive for a paperweight.

     

  • Did you guys read the attached FAQ PDF on that NVIDIA page? It clearly says:

    Q: What is NVIDIA’s ray tracing strategy?
    A: To bring AI and further GPU acceleration to graphics, NVIDIA continues to significantly focus on developing SDKs and technologies for software development partners who create professional raytracing products.
    With this emphasis, NVIDIA has made product development changes around the Iray plugin products.
    NVIDIA will focus on bringing GPU accelerated ray tracing technology to every rendering product out there. Therefore, it further invests into core rendering technology, like:
    > NVIDIA OptiX real-time ray tracing platform for NVIDIA GPU architectures.
    > MDL, the open Material Definition Language.
    > NVIDIA IndeX, our platform technology for visualization and computing of multi-valued volumetric and embedded geometry data.
    > NVIDIA Iray for physically based rendering platforms focusing on the CAD, product and architectural design markets.

    Emphasis mine.

    TL;DR -- They don't support plugins anymore but they are developing the core Iray SDK and vendors are integrating that in their applications.

    @hollaback:
    Could you post any benchmark of a single 2080 Ti? I wonder whether there is any raw performance difference vs. a 1080 Ti (without RT/Tensor cores). If you can test using the DAZ 4.11 Public Beta, that would be nice.

    This is what was being discussed on the previous page; it is not relevant to DS, as the DS plug-in has always been developed by Daz rather than nVidia.

  • outrider42 Posts: 3,679

    When it comes to Iray, I believe that Turing is able to use the drivers originally made for Volta. The only Volta GPU is the Titan V. The Titan V also happens to have Tensor cores, so I believe that Tensor is currently working for Iray. There is a long-running benchmark thread in my sig. The 2080 Ti seems to be rendering about twice as fast as a 1080 Ti. So when you think about that, the Tensor cores have to be factoring in somehow. To see the benches, hit my sig and go to the last few pages from when Turing released.

    Also, VRAM stacking must be enabled PER APPLICATION. It is not enabled out of the box. But for everybody saying that the 2080 Ti cannot stack VRAM, check out this post from the V-Ray Facebook group:

    https://www.facebook.com/groups/VRayRT/permalink/1156723277813577/

    They have V-Ray scaling up to 22 GB, proving once and for all that VRAM stacking is possible. But no, Daz Iray does not support it at this time, and it is up to whoever is handling Iray development to enable it.

    outrider42 said:
    They have V-Ray scaling up to 22 GB, proving once and for all that VRAM stacking is possible. ...

    That FB post doesn't show what the poster claims. Nowhere in the images posted does he show a 22GB pool. All he shows is 2 SLI bridges active, just as should be expected. Nvidia has made very clear that the 20xx cards may have NVLink connectors but do not have the rest of the NVLink infrastructure. Now, Nvidia could be lying, wouldn't be the first time, but they would be setting themselves up to cannibalize their Quadro sales, which are a big chunk of their profits, which they wouldn't likely do.

  • outrider42 Posts: 3,679

    Tensor cores on RTX can only work for AI denoising if you use the DAZ Studio 4.11 Public Beta and enable the AI denoising option.

    Real speedup from the RT (ray tracing) cores is yet to be seen, since the current version of the Iray SDK used in DAZ Studio does not seem to use them yet.

     

    Then why does the 2080 Ti render twice as fast as the 1080 Ti when its pure CUDA performance is not double that of a 1080 Ti? That would seem to suggest something else is going on besides just CUDA. It's obviously not RT. Nobody has run a Titan V bench for Iray that I know of except for Migenius. In their bench the Titan V was over 3 times faster than the previous Titan and the 1080 Ti. Obviously the Titan V is not 3 times more powerful, and no other program shows a performance gain like this for the Titan V. It is getting a boost, and that boost comes from Tensor. Migenius is one of the companies selling the Iray plug-in for Nvidia.

     

    That FB post doesn't show what the poster claims. ... Nvidia has made very clear that the 20xx cards may have NVLink connectors but do not have the rest of the NVLink infrastructure. ...

    Well then will you believe this?

    https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards

    They actually ran benchmarks with NVLink, including a scene so large that it would not render on a single GPU. It needed the pooled VRAM to run.

    I don't know why so many think this will encroach on Quadro. Quadro can do a lot of things besides being a big hunk of VRAM. Nvidia still benefits if people are buying multiple $1200 products plus the NVLink bridge. If you are doing CAD for a big company, you are still getting Quadro. Only small indie creators are going to even be thinking of the gaming cards. And don't forget that Nvidia's GeForce EULA explicitly prohibits deploying gaming cards in data centers. So they probably feel they have that covered.

    Like I said, it was always possible that Nvidia lied. And yes, this will cannibalize Quadro sales. An enormous number of Quadros are sold to Hollywood strictly for the purpose of 3D rendering. Think about the horsepower it takes to render all the CGI coming out of Hollywood these days. Pretty much none of that fits in an 11GB frame buffer, so that leaves Quadro at insane prices. If those guys can instead build HEDT, think Threadripper rigs, with 2080 Ti's and save serious money, they will. The only question is, will 22GB minus overhead be enough? I have no idea. Not my field. But you can be sure that there are folks out there who are paying attention. It has to hurt every time they have to build a rig that costs north of $10k.

  • outrider42 Posts: 3,679

    Like I said, it was always possible that Nvidia lied. And yes, this will cannibalize Quadro sales. ... The only question is, will 22GB minus overhead be enough? ...

    You can only link two 2080 Ti's, and no, 22GB is not enough, not even close to being enough for Hollywood. I remember Pixar talking about their scenes being well over 100GB, and that was several years ago. I'm sure they are even bigger today. The entire reason why Hollywood uses CPU render farms is because GPU rendering has been too limited. Quadro changes that, and the DGX boxes go even further. The DGX-2 has 512GB of VRAM pooled across 16 GPUs using the fastest NVLink switch that Nvidia can make. If you are making big Hollywood movies, this is the box you buy. It costs $500K, but if you are big-budget Hollywood, that's nothing. For smaller studios, the DGX-1 is around $68K.

    Let's review the Quadro line.

    There are two cards to compare. The RTX 5000 is $2300, which is a bit less than the cost of buying two 2080 Ti's. This card has 16GB, so it has less VRAM... maybe. We do not know how much memory actually gets pooled yet. It most certainly is not the full 22GB bank. As stated in the link I posted, some data must be duplicated for performance, and some must be reserved. So realistically we may be looking at 16-18GB of actual VRAM for 2080 Ti's in NVLink mode. You do get a lot more CUDA and Tensor cores, though. But you must remember there is a speed penalty for using NVLink mode, so you are not getting the full benefit of those cores. And then there is TCC mode, which I'll get into later. So in reality the RTX 5000 is not threatened at all by the 2080 Ti in NVLink.
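
    A minimal sketch of that pooling arithmetic (the overhead fractions are illustrative guesses, not published NVLink figures):

    ```python
    # Estimate effective pooled VRAM for two NVLink-ed cards.
    # duplicated_frac and reserved_gb are guesses for illustration,
    # not published NVLink numbers.
    def effective_pool_gb(per_card_gb, n_cards=2,
                          duplicated_frac=0.15, reserved_gb=1.0):
        raw = per_card_gb * n_cards
        duplicated = raw * duplicated_frac  # data mirrored on both cards for speed
        reserved = reserved_gb * n_cards    # driver/display reserve per card
        return raw - duplicated - reserved

    print(effective_pool_gb(11))  # ~16.7 -- in the 16-18 GB ballpark
    ```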

    The next one is the RTX 6000. This card is similar to the 2080 Ti, except it features the full Turing chip, where the 2080 Ti is ever so slightly cut down. So the 6000 is a more powerful card as it is, and it has 24GB of VRAM. Perhaps two 2080 Ti's can go a bit faster, but that depends on the application, and again, you get a speed penalty for running in NVLink mode. So the pair may only be a bit faster than a 6000, especially if you can enable the TCC mode that Quadro can use. TCC can improve rendering quite a bit, and all Quadros can use it. The Titan V can as well, and when run in TCC mode the Titan V renders MUCH faster. So the 5000 and 6000 have this big advantage right out of the gate. For many, the lack of TCC on gaming cards will be a deal breaker.

    This chart here shows the difference that TCC makes for the Titan V.

    I have no doubt Pixar is stuck with the workstations. Pixar isn't the only CGI operation out there, though. There are lots and lots of shops out there doing a whole lot of CGI work; just watch the credit roll of any action movie. Those smaller operations are where the pressure will be. Even the DGX-1 may be a little pricey for them. A bunch of rigs, each with two 2080 Ti's pooling their memory, might get very appealing to V-Ray renderers, who have always been able to do distributed rendering but had little impetus to do so if each card was limited to its own memory.

    The number of boxes I can build for that $68k is kind of amazing. Assuming V-Ray DR works as advertised (again, not my thing, so I have no idea), it would thoroughly crush the DGX-1.

    So can anyone here recommend a GTX video card that is capable of rendering simple scenes in under a minute? When I say "simple scenes", I'm talking about those that have 1-2 characters (Genesis 8, or Genesis 8 + Genesis 2), basic ground texture, 1-2 walls, 3-4 spotlights, etc. I've been trying to put together a list for a new Windows 10 Pro system and I'm currently stuck on the video card. Originally I was going to have 2x GTX 1080 Ti's, but they stopped production and the company I plan on having build my system no longer offers them. They only offer (besides the obvious lower-performance cards):

    GTX 1070 Ti 8GB $399, GTX 1080 8GB $469, GTX TITAN Xp 12GB (Pascal) $1449, RTX 2070 8GB $455, RTX 2080 8GB $834, RTX 2080 Ti 11GB $1359.

    Both the Titan Xp and RTX 2080 Ti are completely out of the question since they are slightly above my budget and I still need to put money towards a decent case & fans for plenty of airflow. I'm also going with a 1200-watt PSU so I don't have to worry about it later when I add an additional card.

    Right now it just sucks bad trying to experiment with lighting and such using a GTX 765M with 2GB VRAM, where rendering takes like an hour for simple scenes. I'm thinking about going with a GTX 1080 8GB for now and holding off on the newer 2000-series cards. Just not 100% sure at this point. :(

    The 1070 Ti and the 1080 are roughly equivalent in performance. I'd save the money and get the 1070 Ti if you can get it for $70 less. If you can swing the extra money I'd get the 2070 over the 1080, as the RT stuff should result in faster rendering. Are you planning to get 2 GPUs now and to add a 3rd later? If so, get the 1200-watt PSU, and get an 80+ Titanium. If you only plan on 2 GPUs total, you can drop down to a 1000 watt pretty safely, unless you intend to go to something like two 2080 Ti's.

  • Paradigm Posts: 421

    Anyone else been having issues with the 2080 Ti in the newest public beta? I posted in the Commons that the viewport leverages it great, but when I try to render it will only use the CPU, despite the scene working fine on my old 970. This seems to only be the case with some scenes, but do we know what it is exactly? I've been working on a comic strip, so I edit the same scene over and over, and I have 11 pages already and really can't just be like "oh well, guess this scene is useless."

    The 1070 Ti and the 1080 are roughly equivalent in performance. ... If you can swing the extra money I'd get the 2070 over the 1080, as the RT stuff should result in faster rendering. ...

    Yes, RT stuff should result in faster rendering, but does it? Does DS/Iray even support the new RT feature on these cards? Also, from what I understand, you have to use a specific version of the drivers for the newer 2000-series cards in order for Iray to work correctly in DS. If the answer is "no" to the previous questions, it seems a bit of a gamble to purchase these newer cards at this time, not to mention the unknown wait time for the necessary updates. I should also comment that some of those cards produce heat issues within the case when you have more than one, due to how the fins are oriented and how the heat is dissipated from them. If you have water cooling for them, it's probably not an issue, but I'm not interested in water cooling for my graphics card(s).

    Right now I just want to get a single GPU and add a second one later on. I'm fairly certain a 1200-watt 80+ Platinum will be more than sufficient in regard to PSU needs. I'm not going to be rendering or running the system under heavy load 24/7, so it wouldn't justify the extra cost for an even higher efficiency on the PSU.

    Yes, RT stuff should result in faster rendering, but does it? Does DS/Iray even support the new RT feature on these cards? ... I'm fairly certain a 1200-watt 80+ Platinum will be more than sufficient in regard to PSU needs. ...

    The benchmark thread has people benchmarking the 20xx cards running the public beta, and the performance improvement is pretty significant even without RT. RT support for Iray is supposed to be coming soon.

    If you have a decent case for cooling I wouldn't worry overmuch about the heat from the 20xx cards. I think we're seeing some issues with the manufacturers working the kinks out of their fan curves more than anything. But if you are sure you are getting 2 cards, then be sure your case has good ventilation.

    What I was saying was you probably don't need a 1200-watt PSU for just 2 GPUs. You can probably get by on a 1000 or 1100 watt PSU. Keep in mind that all PSUs are most efficient at about 50% of their maximum rated draw, and 2 GPUs of roughly 1080 or 2070 class, plus the rest of a system (assuming you don't have a bunch of HDDs or other hardware that draws a lot of power), will come in around 500 or 550 watts.
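
    A quick sketch of that sizing logic (the wattages are the rough figures from this thread, not measurements):

    ```python
    # Size a PSU so typical draw lands near 50% of its rating,
    # where efficiency curves tend to peak.
    GPU_WATTS = 225          # rough draw of a 1080/2070-class card
    BASE_SYSTEM_WATTS = 100  # CPU, board, RAM, drives (light config)

    def recommended_psu_watts(n_gpus):
        draw = BASE_SYSTEM_WATTS + n_gpus * GPU_WATTS
        return draw, draw * 2  # (estimated draw, PSU rating for ~50% load)

    draw, psu = recommended_psu_watts(2)
    print(f"~{draw} W draw -> ~{psu} W PSU")  # ~550 W draw -> ~1100 W PSU
    ```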

  • outrider42 Posts: 3,679

    Yes, RT stuff should result in faster rendering, but does it? Does DS/Iray even support the new RT feature on these cards? ... Right now I just want to get a single GPU and add a second one later on. ...

    The RT cores do not work yet, and we do not have an ETA on when they will. Like kenshaw said, the 2000 cards still offer a big render speed boost even without the RT cores working.

    In order to use the 2000 series with Daz Studio, you need the beta version of Daz, which is 4.11. The 2000 series will not work in 4.10 Pro. You will need to update drivers, but you need to do that anyway for new GPUs. The Daz beta is a separate install from the full Daz Pro, so you can try the beta without deleting Pro. We do not have an ETA on when the beta features will ship into the full version of Daz, but it is only a matter of time. Besides supporting Turing, the big new feature of the beta is a new denoiser that actually works as advertised. The denoiser itself is a good enough reason to at least try out the beta even if you do not have a Turing GPU.

    1000 watts is more than plenty, even for two 2080 Ti's. If you are running 24/7, good cooling is important, and it is even more vital if you plan on ever doing multiple GPUs. PLAN YOUR AIRFLOW WISELY. Or liquid, whatever.

    I can point you to a place that does very good reviews of hardware: GamersNexus. Like most reviewers they are more focused on gaming, but much of their work will apply to rendering. They have extensive reviews of every part you'd use. In particular, their case reviews would be helpful, because cases are universal whether gaming or rendering. They also review the cooling properties of video cards, which will be another thing for you to look at. They have some in-depth reviews of power supplies, including an excellent video on what makes a good power supply "good". You can learn all sorts of things there. They sometimes use Blender in benchmarks, so they do have a few rendering-themed benches.

    A case review

    And though you can't buy many new 1080 Ti's anymore, this video covers the different varieties of 1080 Ti's out there. It can still serve as a basis for some Turing coolers, as some board makers use basically the same coolers on new cards. It also shows that bigger does not always mean better: the size of a GPU cooler does not mean it will actually cool better.

    And a really cool video about power supplies.

    I haven't built a PC in the H500 yet, but I've built multiple PCs in the H500P Mesh and I heartily recommend it; it is an excellent case. I'm assuming the H500 will also be just as good, based on the review.

  • outrider42 Posts: 3,679

    I have no doubt Pixar is stuck with the workstations. ... A bunch of rigs, each with two 2080 Ti's pooling their memory, might get very appealing to V-Ray renderers ... it would thoroughly crush the DGX-1. ...

    No, they are not using workstations as much these days. Pixar entered a deal with Nvidia back in 2015, and in 2018 they have now added RTX to their famous RenderMan renderer. This video was from August 30.

    The DGX sounds super expensive, and it is, but you need to understand the value in it for large studios and companies. It is far, far cheaper than any massive CPU-based render farm. It uses far less power and takes up much less physical space. When CEO Huang says "The more you buy, the more you save," he actually has a valid point beyond PR speak. The one and only limitation of GPU rendering used to be VRAM, but in the past few years Nvidia has found ways around that, and that has changed the entire industry forever. That technology is only starting to filter its way down to consumer desktops with NVLink.

  • Adruzo Posts: 28

    Does anybody know if this card speeds up rendering time (and whether, when combining two cards, the memory REALLY doubles this time, not like before)?

    Speeds up rendering time as compared to what? Are you upgrading from a 980 Ti, a 1070, a 1080 Ti? How much of a difference you see in render times depends greatly on what you are upgrading from. For example, I used to render with two Titan Z cards. I am now using two 1080 Ti cards. For scenes that used less than 6GB of VRAM, the Titan Z's were about 15% faster than the two 1080 Ti's; however, any scene that used more than 6GB of VRAM rendered entirely on the CPU. So I gave up a small amount of render speed for the ability to create larger scenes.
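
    That tradeoff is easy to model. A minimal sketch (the speed factors are illustrative, loosely based on the numbers above; the CPU penalty is a guess, not a benchmark):

    ```python
    # Illustrative model of the VRAM-fit tradeoff described above.
    def render_minutes(base_minutes, scene_vram_gb, card_vram_gb,
                       gpu_speedup=1.0, cpu_penalty=8.0):
        if scene_vram_gb <= card_vram_gb:
            return base_minutes / gpu_speedup
        return base_minutes * cpu_penalty  # scene drops to the CPU

    base = 10  # minutes for a scene that fits on the 1080 Ti
    print(render_minutes(base, 5, 6, gpu_speedup=1.15))  # ~8.7: Titan Z wins
    print(render_minutes(base, 8, 6, gpu_speedup=1.15))  # 80.0: CPU fallback
    print(render_minutes(base, 8, 11))                   # 10.0: 1080 Ti wins
    ```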

  • NotAnArtist Posts: 390
    edited December 2018

    <Edit>...done.

    Post edited by NotAnArtist on
  • Gogger Posts: 2,416
    edited January 2019
    Ok, I'm still sitting in my car in the parking lot after purchasing the MSI Geforce RTX 2080 Ti Gaming X Trio. The ONLY one in the store. I'll be returning the RTX 2080 Founder's edition I got a couple weeks ago after being told they wouldn't be getting any of the Ti versions again - I should have known better! I'll post my initial impressions after I get it installed and tested. Those extra CUDA cores and more memory better be worth the price!
    Post edited by Gogger on
  • davegv said:

     

    So you've got NVidia developing an AI version of Iray - while developing a series of graphics cards that use AI - and you think NVidia is not going to further develop the Iray SDK?

    Not a chance.

    If you really think Nvidia is still developing Iray you are in for a rude awakening.

    5 months later :) I hope reality has sunk in for you guys that were oh so damn sure of yourselves. The question now is... what do we all do next?

  • Gogger Posts: 2,416
    Gogger said:
    Ok, I'm still sitting in my car in the parking lot after purchasing the MSI Geforce RTX 2080 Ti Gaming X Trio. ... I'll post my initial impressions after I get it installed and tested. ...

    I have been testing the RTX 2080 Ti out a bit, and two things that are worthy of note so far are:
    1. I get some sort of representation of my scene within 20 seconds, so I can make adjustments and eventually start a final render in a MUCH shorter time than my GTX 980 Ti provided.
    2. While rendering I can still use my computer for other things without it hanging and lagging. I can even use Photoshop concurrently!

    I know the truly amazing benefits won't be in evidence until all the Iray and driver issues are sorted out and full feature exploitation is in effect. Whenever that finally happens, I'll be ready with over 4000 CUDA cores and 11GB of memory, not to mention the benefits of AI - I'm excited for the future, today!

  • davegv said:

    So you've got NVidia developing an AI version of Iray ... Not a chance. ... If you really think Nvidia is still developing Iray you are in for a rude awakening. ... 5 months later :) I hope reality has sunk in for you guys that were oh so damn sure of yourselves ...

    I think your quotes are saying opposite things, so I'm not sure what you are trying to establish.

  • I think your quotes are saying opposite things, so I'm not sure what you are trying to establish. ...

    Exactly... so what do we do?
