From gtx1080ti to rtx2080ti?

Hello all,

sorry if this has been discussed before; I did try to find a good benchmark in the long thread, but I didn't find exactly what I was looking for.

today I have a gtx1080ti card with an i5-8400 and 16 GB RAM. Rendering animations takes a really long time.

what do you think about render time if I buy a rtx2080ti? 50% faster? 10%?

br Daniel

 

Comments

  • kenshaw011267 Posts: 3,805

    Based on the benchmarks for single images I'd guess somewhere around 10%. That might go up substantially if Nvidia ever enables the RTX features in Iray.

  • Thanks for the answer. For now, the 2080ti is too expensive for just 10% faster renders.

     

    br Daniel

  • I haven't seen any actual benchmarks, but people have been talking of a 50% speed increase just from the jump from 3500ish to 4300ish CUDA cores and efficiency gains. I'm not upgrading from my pair of 1080Ti GPUs until Nvidia updates IRAY and Daz adds that version to Studio to make use of the TENSOR cores. There's also talk of Nvidia changing to a 7nm process, which will come through in 2020, and that the TURING architecture will be replaced at the same time.
    S.

     

  • There's also the fact that a number of the 2080Ti cards (not sure about the 2080 8GB versions) were having issues ranging from artifacts to dying after a few days or weeks of casual gaming. I don't know if the problem(s) have been corrected on Nvidia's side or if the same chance exists of ending up with one of these "busted" cards. I read a good article the other day, published back in January (if I recall correctly), which flat out stated the cost of these cards was way overpriced and unjustified, and that if you already have a 1080Ti, you're better off saving your money until these cards mature.

  • kenshaw011267kenshaw011267 Posts: 3,805

    There's also the fact that a number of the 2080Ti cards (not sure about the 2080 8GB versions) were having issues ranging from artifacts to dying after a few days or weeks of casual gaming. I don't know if the problem(s) have been corrected on Nvidia's side or if the same chance exists of ending up with one of these "busted" cards. I read a good article the other day, published back in January (if I recall correctly), which flat out stated the cost of these cards was way overpriced and unjustified, and that if you already have a 1080Ti, you're better off saving your money until these cards mature.

    The card manufacturers said they had not received an unusual number of RMAs on the Turing cards. It seems that people made a big deal out of the usual rate of hardware failures.

  • RayDAnt Posts: 1,147
    edited March 2019

    Thanks for the answer. For now, the 2080ti is too expensive for just 10% faster renders.

     

    br Daniel

    FYI, benchmarks like this one put the performance difference somewhere in the neighborhood of 60-90% faster. And that looks likely to increase by several HUNDRED percent once full RTX support comes later this year.

    Post edited by RayDAnt on
  • kenshaw011267 Posts: 3,805
    RayDAnt said:

    Thanks for the answer. For now, the 2080ti is too expensive for just 10% faster renders.

     

    br Daniel

    FYI, benchmarks like this one put the performance difference somewhere in the neighborhood of 60-90% faster. And that looks likely to increase by several HUNDRED percent once full RTX support comes later this year.

    No it does not. 8 minutes 28 seconds for 1 1080ti vs 4 minutes 38 seconds for 1 2080ti. 278 seconds is 55% of 508 seconds, which is way off other benchmark runs. Telling someone to expect more than about a 10% boost is asking for trouble until RTX is enabled.
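The two figures being argued over here can both be reproduced from the same pair of times; a minimal Python sketch (times taken from this post, rounding mine):

```python
# Render times quoted above for a single card each.
t_1080ti = 8 * 60 + 28  # 508 seconds
t_2080ti = 4 * 60 + 38  # 278 seconds

# "The 2080ti takes X% of the 1080ti's time" (smaller is better)
time_fraction = t_2080ti / t_1080ti  # ~0.55, i.e. a ~45% time reduction

# "The 2080ti is Y% faster" (work per unit time, larger is better)
speedup = t_1080ti / t_2080ti - 1  # ~0.83, i.e. ~83% faster

print(f"{time_fraction:.0%} of the time; {speedup:.0%} faster")
```

So the same benchmark run can honestly be quoted as "55%" or as "83% faster", depending on which ratio is meant.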

  • RayDAnt Posts: 1,147
    edited March 2019

    No it does not. 8 minutes 28 seconds for 1 1080ti vs 4 minutes 38 seconds for 1 2080ti. 278 seconds is 55% of 508 seconds.

    I'm confused. Is that a typo, or are you actually attempting to say that 55% is closer to 10% than it is to 60%?

     

     

    Telling someone to expect more than about a 10% boost is

    ...is absolutely accurate based on the benchmarking data (some of which is original to me) currently available from multiple sources, e.g. this comprehensive (pre-RTX-enabled) performance study from MiGenius, which puts the 2080ti at approximately 69% faster than the 1080ti for Iray rendering specifically.

    Post edited by RayDAnt on
  • kenshaw011267 Posts: 3,805

    You made the claim of 60 to 90% and posted the link as your exemplar. The posted bench run did not show that. 

    That's not a useful bench at all. There is no report on which 1080ti or 2080ti were used. Since there are literally dozens of each, many of which are factory OC'd and all of which have slight differences in the PCB and cooler, the bench should state specifically which cards were used, as the bench only applies to a direct comparison between those cards. So not a comprehensive study at all.

    Further, the bench is fatally flawed in methodology. It simply compares time to 250 iterations, not time to a complete, 100% converged render. You have absolutely no idea what the image quality is after 250 iterations on each card, simply that it did them that fast. Since what matters in reality is time to a complete render, the entire bench is of extremely low utility. That's why real bench runs of GPUs compare time to completion of a well-known, and usually publicly available, image.

    I'll just let it stand that the lack of full information on the bench hardware also makes the entire test of even more limited utility. With no information on the PSU or motherboard beyond which chipset it is, there is no way to validate the results.

    That these guys are also making up their own metric, megapaths per second (WTF is a megapath?), also makes the bench of questionable utility. I just used Google to look up the term, and the only place it's used in relation to rendering is by these guys.

    Now, all that might not be apparent to everyone, but I evaluate hardware for my day job, and that was a bench I wouldn't even consider in an evaluation. That's why I always send people to the bench thread here on Daz when they ask about GPU performance. They use the same test (though you'll occasionally find runs using a different scene) and post details of their hardware, allowing someone to make a true apples-to-apples comparison if that matters to them.

    If you actually look through the 1080ti and 2080ti runs, you will not find any where the 2080ti cuts the render time of the bench by 90%. Even 60% is pretty rare. If I had to hazard a guess, without actually gathering the data and crunching the numbers, the average would be closer to 40%.

    But no one should ever expect to hit even the average when buying something. Unless it is an exact match for the HW and environment of the bench, which is essentially impossible to achieve, they should expect to achieve the minimums of the test. Which is why I told the poster 10%: claiming more would lead to disappointment if they didn't hit something higher.

  • RayDAnt Posts: 1,147

    You made the claim of 60 to 90% and posted the link as your exemplar. The posted bench run did not show that. 

    I originally said, and I quote:

    FYI, benchmarks like this one put the performance difference somewhere in the neighborhood of 60-90% faster.

    "somewhere in the neighborhood of 60-90%" is accurate based on every benchmarking statistic so far sourced in this thread.

     

    That's not a useful bench at all. There is no report on which 1080ti or 2080ti were used.

    Because of the aggressive way Nvidia regulates internal power management in their GPU designs, recent Nvidia GPUs are extremely limited in terms of performance headroom past their original design specifications (e.g. see this study for more). Meaning the biggest performance variance you are ever going to see between different board-partner models of the same Nvidia GPU is around 10%. Which isn't really enough variance to significantly change the results of a study like that MiGenius one (especially since 16 out of the 22 GPUs benchmarked don't even offer different board-partner models/cooling setups). Hence leaving off specific board-partner models is generally a moot point.

     

    Since there are literally dozens of each, many of which are factory OC'd and all of which have slight differences in the PCB and cooler,

    See above.

     

    Further the bench is fatally flawed in methodology. It simply compares time to 250 iterations. Not time to a complete, 100% converged, render.

    Pardon? Both the Daz user benchmark previously linked to and the MiGenius one are convergence-%-limited, not iteration-limited. E.g. the MiGenius study very clearly states:

    In order to ensure we are testing raw Iray performance we have developed a stand-alone benchmark tool based on Iray. Our tool renders a fixed scene multiple times and averages the results to ensure consistency. To ensure the results mean something for real-world use we utilise a non-trivial test scene, ensuring the GPUs have plenty of work to do. The image above is a fully converged version of our test scene.

    As a slight aside, 

    Note that these benchmarks are not performed in a way that they can be compared to the previous series of benchmarks migenius conducted which is why we are retesting even the older cards where possible. This is due to changes in Iray itself, new Iray versions often change the relationship between iteration count and quality which can affect our absolute measurements. However all relative measurements between cards within the benchmark are valid.

    Iray is an unbiased renderer. There is no such thing as 100% convergence in an unbiased rendering environment because it would take an infinite amount of time to complete. Consequently all unbiased rendering engines use some sort of cheat calculation method to approximate the level of true convergence in a scene (in Iray's case this is done by periodically comparing the currently calculated value of each pixel in the actively rendered image to what it was several iterations ago and then counting that pixel as converged against the total number of pixels in the image as a running convergence percentage.) Which is why convergence % actually makes for a much less useful statistic for deriving performance numbers than someone might think (actually less so than iteration counts, since scenes are natively rendered in iteration sized chunks.) But it's good enough in most cases.
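The running-convergence heuristic described above can be sketched in a few lines. This is an illustrative approximation only, not Iray's actual implementation; the function name, threshold value, and per-pixel representation are all assumptions:

```python
def convergence_percent(prev_frame, curr_frame, threshold=0.01):
    """Fraction of pixels whose value changed by less than `threshold`
    between two sampling checkpoints (a proxy for 'converged')."""
    converged = sum(
        1 for p, c in zip(prev_frame, curr_frame) if abs(c - p) < threshold
    )
    return converged / len(curr_frame)

# Example: 3 of 4 pixels have settled between checkpoints.
prev = [0.50, 0.200, 0.90, 0.100]
curr = [0.50, 0.202, 0.70, 0.105]
print(convergence_percent(prev, curr))  # 0.75
```

Because the threshold is a heuristic, the reported percentage can over- or under-estimate true convergence, which is the point being made above.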

     

    WTF is a megapath?

    The number of simulated light paths traced in a scene divided by 1,000,000 (mega = one million). Admittedly it's a mostly useless statistic (since it appears to only be accessible to Iray developers). But that doesn't detract from its usefulness for comparison purposes.

     

    If you actually look through the 1080ti and 2080ti runs you will not find any where the 2080ti cuts the render time of the bench by 90%. Even 60% is pretty rare. If I had to hazard a guess the average would be closer to 40% without actually gathering the data and crunching the numbers.

    Early last month I decided to finally bite the bullet and start gathering the data/crunching the numbers for that ENTIRE benchmarking thread. Long story short, out of the nearly 800 intelligible datapoints currently in that thread only SIX of them are set up (SAME Daz Studio version, SAME benchmarking scene) in such a way as to establish a baseline direct comparison between 1080ti and 2080ti performance. Here they are:

    1080ti #1 TRT: 8 minutes 28.28 seconds or 508.28 seconds (note: OptiX Off)
    1080ti #2 TRT: 7 minutes 34.25 seconds or 454.25 seconds (note: OptiX On)
    1080ti #3 TRT: 7 minutes 09.82 seconds or 429.82 seconds (note: OptiX On)
    1080ti #4 TRT: 7 minutes 55.32 seconds or 475.32 seconds (note: OptiX Off)
    For an average of 466.92 seconds.

    2080ti #1 TRT: 4 minutes 37.84 seconds or 277.84 seconds (note: OptiX Off)
    2080ti #2 TRT: 4 minutes 44.83 seconds or 284.83 seconds (note: OptiX On)
    For an average of 281.34 seconds.

    466.92 seconds is 1.66 times as long as 281.34 seconds. Meaning that these results indicate the 2080ti is currently 66% faster than the 1080ti for Iray rendering.
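The averaging above can be checked with a few lines of Python over the six datapoints listed:

```python
# Total render times (seconds) from the six comparable bench runs above.
gtx_1080ti = [508.28, 454.25, 429.82, 475.32]
rtx_2080ti = [277.84, 284.83]

avg_1080 = sum(gtx_1080ti) / len(gtx_1080ti)  # ~466.92 s
avg_2080 = sum(rtx_2080ti) / len(rtx_2080ti)  # ~281.34 s

ratio = avg_1080 / avg_2080  # ~1.66, i.e. the 2080ti is ~66% faster
print(f"{avg_1080:.2f} s vs {avg_2080:.2f} s -> {ratio - 1:.0%} faster")
```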

     

    Unless it is an exact match for the HW and environment of the bench, which is essentially impossible to achieve, they should expect to achieve the minimums of the test.

    The lowest quotable minimum statistic for 2080ti vs 1080ti Iray performance is currently 66% - not 10%.
