Two graphics cards?


Comments

  • kyoto kid Posts: 41,245
    dragotx said:
    ebergerly said:

    You're much better off with a 1080 Ti than with two smaller cards. 

    But do what your budget lets you. Not everyone just has $700 - $1000 to shell out for 1080 Ti cards.

    Absolutely. For many people, going from a 30 minute render to a 15-20 minute render isn't worth the $300 premium. It's still a long damn render. 

    For me, I'm looking at a 1080 Ti not for getting a 30 minute render down to 15, but for getting an 8 to 10 hour render down to whatever I can get it to. And that's on the ones that do manage to stay within the memory on my 1070. I render at what I admit is an absurd resolution for most people, and that's why it takes so long, but it's the resolution I want to render at. If I use an HDRI for the environment and only one character, then I can knock them out in 10 to 15 minutes. But for what I usually render there aren't any HDRIs that really fit the image I'm going for, so I have to stick with full 3D environments. Also, I find myself sticking anywhere from 4 to 10 characters in the scene. So for those renders, having that much more VRAM is important all on its own, not even considering the performance boost over the 1070. Between the extra memory and IF I can really get a 40% boost by adding a 1080 Ti to my system, then $300 is a lot easier to justify.

    ...yes!

  • dragotx said:
    ebergerly said:

    You're much better off with a 1080 Ti than with two smaller cards. 

    But do what your budget lets you. Not everyone just has $700 - $1000 to shell out for 1080 Ti cards.

    Absolutely. For many people, going from a 30 minute render to a 15-20 minute render isn't worth the $300 premium. It's still a long damn render. 

    For me, I'm looking at a 1080 Ti not for getting a 30 minute render down to 15, but for getting an 8 to 10 hour render down to whatever I can get it to. And that's on the ones that do manage to stay within the memory on my 1070. I render at what I admit is an absurd resolution for most people, and that's why it takes so long, but it's the resolution I want to render at. If I use an HDRI for the environment and only one character, then I can knock them out in 10 to 15 minutes. But for what I usually render there aren't any HDRIs that really fit the image I'm going for, so I have to stick with full 3D environments. Also, I find myself sticking anywhere from 4 to 10 characters in the scene. So for those renders, having that much more VRAM is important all on its own, not even considering the performance boost over the 1070. Between the extra memory and IF I can really get a 40% boost by adding a 1080 Ti to my system, then $300 is a lot easier to justify.

    Perhaps it's because I have a 4K monitor on Windows 10, but I find going beyond 8 GB absurdly easy to do.

    An Iray scene purchased here, 2 or 3 HD Genesis 3 figures, their clothing and hair, and perhaps an HDRI for lighting or a background, rendered at 4K, and I'm over 8 GB. EZ PZ.

  • dragotx Posts: 1,138
    edited September 2017
    dragotx said:
    ebergerly said:

    You're much better off with a 1080 Ti than with two smaller cards. 

    But do what your budget lets you. Not everyone just has $700 - $1000 to shell out for 1080 Ti cards.

    Absolutely. For many people, going from a 30 minute render to a 15-20 minute render isn't worth the $300 premium. It's still a long damn render. 

    For me, I'm looking at a 1080 Ti not for getting a 30 minute render down to 15, but for getting an 8 to 10 hour render down to whatever I can get it to. And that's on the ones that do manage to stay within the memory on my 1070. I render at what I admit is an absurd resolution for most people, and that's why it takes so long, but it's the resolution I want to render at. If I use an HDRI for the environment and only one character, then I can knock them out in 10 to 15 minutes. But for what I usually render there aren't any HDRIs that really fit the image I'm going for, so I have to stick with full 3D environments. Also, I find myself sticking anywhere from 4 to 10 characters in the scene. So for those renders, having that much more VRAM is important all on its own, not even considering the performance boost over the 1070. Between the extra memory and IF I can really get a 40% boost by adding a 1080 Ti to my system, then $300 is a lot easier to justify.

    Perhaps it's because I have a 4K monitor on Windows 10, but I find going beyond 8 GB absurdly easy to do.

    An Iray scene purchased here, 2 or 3 HD Genesis 3 figures, their clothing and hair, and perhaps an HDRI for lighting or a background, rendered at 4K, and I'm over 8 GB. EZ PZ.

    Exactly! And I use a lot of monsters and creatures in my renders, and more and more of them are coming in at high SubD levels because of how much detail goes into their textures (like the new Ultimate Lady Zombie, who is AWESOME, by the way). I just finished setting up a scene using 1 of her, 3 Grave Walkers, and a G8 character, and blew right past the 8 GB like it wasn't even there (and that's after lowering the SubD). With Scene Optimizer I can finagle it down low enough to actually fit on the card, but I'd rather not have to do that. It's getting harder and harder not to give up and go buy a 1080 Ti.

  • FWIW Posts: 320
    edited September 2017

    I do tend towards complex scenes that I literally can't render right now at all, and I have had to go back to portrait types, which still take 2 days to render. I am on an old Envy laptop. In February we are buying a new one, and I want to do as much research ahead of time as possible. My last render was only 1000 x 800, one G3, and a handful of props, and it still took 2 days during which I could not even touch my laptop. I know literally anything will be better, but since this will be the last time in the next 3 years that I can upgrade (everything is planned that far out), I want to be sure to get the best I can afford. I do game some, but the most graphics-intensive ones are things like severely modded Skyrim and Fallout/Borderlands type games. And I'm terrible at layering; I can't draw shadows convincingly to save my life.

  • dragotx Posts: 1,138

    I do tend towards complex scenes that I literally can't render right now at all, and I have had to go back to portrait types, which still take 2 days to render. I am on an old Envy laptop. In February we are buying a new one, and I want to do as much research ahead of time as possible. My last render was only 1000 x 800, one G3, and a handful of props, and it still took 2 days during which I could not even touch my laptop. I know literally anything will be better, but since this will be the last time in the next 3 years that I can upgrade (everything is planned that far out), I want to be sure to get the best I can afford. I do game some, but the most graphics-intensive ones are things like severely modded Skyrim and Fallout/Borderlands type games. And I'm terrible at layering; I can't draw shadows convincingly to save my life.

    If you're looking at a tight budget for an upgrade, I highly recommend going with a desktop if at all possible (even better if you can build it yourself) instead of a laptop; you will get a lot more bang for your buck that way.

  • I do tend towards complex scenes that I literally can't render right now at all, and I have had to go back to portrait types, which still take 2 days to render. I am on an old Envy laptop. In February we are buying a new one, and I want to do as much research ahead of time as possible. My last render was only 1000 x 800, one G3, and a handful of props, and it still took 2 days during which I could not even touch my laptop. I know literally anything will be better, but since this will be the last time in the next 3 years that I can upgrade (everything is planned that far out), I want to be sure to get the best I can afford. I do game some, but the most graphics-intensive ones are things like severely modded Skyrim and Fallout/Borderlands type games. And I'm terrible at layering; I can't draw shadows convincingly to save my life.

    That's why I posted what I typically render, letting you know what I do that goes past 8 GB.

    There are a bunch of tricks, like lowering the resolution of textures on items farther from the camera, sharing textures when you can, etc., but I found a lot of that was also a big time sink.

  • FWIW Posts: 320

    Oh 100% getting a desktop. The only reason I got a laptop before was because I literally had no place to put a desktop. I was living out of one room that I was sharing. (not a one bedroom, one ROOM)

  • bluejaunte Posts: 1,923
    ebergerly said:

    You're much better off with a 1080 Ti than with two smaller cards. 

    But do what your budget lets you. Not everyone just has $700 - $1000 to shell out for 1080 Ti cards.

    Absolutely. For many people, going from a 30 minute render to a 15-20 minute render isn't worth the $300 premium. It's still a long damn render. 

    20 vs 30 minutes is huge. You're not just rendering one image; it's dozens, hundreds, thousands. A hundred renders at 20 vs 30 minutes each is over 16 hours saved. And then add viewport rendering speed during lookdev and VRAM on top. If that's not worth the money, I don't know what is.
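
    As a quick sketch of that arithmetic (the 100 renders are just an illustrative batch size):

        # Time saved over a batch of renders at 20 vs. 30 minutes each.
        renders = 100                          # illustrative batch size
        minutes_saved = renders * (30 - 20)
        print(f"{minutes_saved} minutes saved, i.e. {minutes_saved / 60:.1f} hours")   # ~16.7 hours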

  • ebergerly Posts: 3,255
    edited September 2017

    20 vs 30 minutes is huge. You're not just rendering one image; it's dozens, hundreds, thousands. A hundred renders at 20 vs 30 minutes each is over 16 hours saved. And then add viewport rendering speed during lookdev and VRAM on top. If that's not worth the money, I don't know what is.

    I suppose if you're just sitting there waiting and doing nothing, then yeah, you make a good point, especially if you're in a business and time is money. But I'm guessing hobbyists like me go off and do other stuff (browse the web, go to the DAZ forums, watch videos, get some lunch, etc.), and since it's a hobby it's not all that critical. I agree that at a certain point it gets really annoying, but the question is whether a 30 vs. 20 minute render by itself, or even 20 of them in a day, is that big a deal. I guess it's up to the individual to answer that. And yeah, over a lifetime it adds up, but it all depends on the annoyance factor for each person.

    Personally, my renders tend to peak at maybe 30 minutes at most, usually more like 5-10 minutes. So it's tough for me to justify paying $300 to go from 10 minutes to 5 minutes. But that's just me.

    And you mention viewport rendering, which was the primary reason I was considering getting a second or faster GPU. I found the magic settings to make the viewport response almost instantaneous, so my need for a new GPU dropped drastically.  

  • bluejaunte Posts: 1,923
    ebergerly said:

    20 vs 30 minutes is huge. You're not just rendering one image; it's dozens, hundreds, thousands. A hundred renders at 20 vs 30 minutes each is over 16 hours saved. And then add viewport rendering speed during lookdev and VRAM on top. If that's not worth the money, I don't know what is.

    And you mention viewport rendering, which was the primary reason I was considering getting a second or faster GPU. I found the magic settings to make the viewport response almost instantaneous, so my need for a new GPU dropped drastically.  

    What settings are those? That sounds intriguing.

  • Kevin Sanderson Posts: 1,643
    edited September 2017

    From the Iray documentation at NVIDIA -- http://irayrender.com/fileadmin/filemount/editor/PDF/iray_Performance_Tips_100511.pdf

    Page 4 -- "iray does not use NVIDIA Scalable Link Interface (SLI). However, performance tests showed that SLI
    can significantly slow down iray, hence it should be disabled.
    SLI can be disabled within the NVIDIA Control Panel."

    This is also true for Octane Render.

  • outrider42 Posts: 3,679

    Before the 1080 Ti dropped, I was fully in the camp of buying two 1070s. However, the 1080 Ti came along and was far better than what many were expecting. I thought that card would be almost $1000. It's nice to be wrong there, but I think Nvidia at the time was worried about AMD Vega. The 1080 Ti came out shortly after the WILDLY successful AMD launch of Ryzen, and I believe that woke Nvidia up a bit. Nvidia wants to keep its performance crown.

    And then the mining boom happened. That drove up the prices of most GPUs across the board... except for the 1080 Ti. The 1070 should be a $350 card right now, but instead it is running over $450. That is higher than it was at its launch a full year ago, which is absurd. While this inflation continues, it makes the 1080 Ti a far, far better deal. Consider all of this:

    One 1080 Ti will use less electricity than two 1070s.

    One 1080 Ti has more VRAM than two 1070s.

    Two 1070s will have more total CUDA cores than one 1080 Ti, but you also have to factor in that using two cards is not a full 100% boost. Plus, it takes a bit longer to get a scene loaded onto multiple GPUs.

    And right now, the cost of two 1070s is greater than that of a single 1080 Ti, and they do not offer better performance. IMO the answer is a no-brainer.
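
    As a rough way to put numbers on the core-count and scaling point above, here is a small sketch; the CUDA core counts are the published specs, but the multi-GPU efficiency figure is purely an illustrative assumption, and clock speed differences are ignored:

        # Back-of-the-envelope: two 1070s vs. one 1080 Ti.
        # Core counts are published specs; the scaling factor for a second
        # card is an illustrative assumption, not a measured benchmark.
        cores_1070 = 1920
        cores_1080ti = 3584
        second_card_efficiency = 0.9          # assume a 2nd card adds ~90% of its throughput

        effective_dual_1070 = cores_1070 * (1 + second_card_efficiency)
        print(f"Effective cores, 2x 1070: {effective_dual_1070:.0f}")                 # ~3648
        print(f"Cores, 1x 1080 Ti:        {cores_1080ti}")
        print(f"Ratio:                    {effective_dual_1070 / cores_1080ti:.2f}")  # ~1.02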

    Now a funny thing is happening: Nvidia is doing something it has never done before. It is going to release a 1070 Ti. Yes, you read that correctly, the Ti brand is being used on an x70 card. This upcoming 1070 Ti might be compelling; it is much, much closer to a 1080 in performance. Price will be a big question, and it remains to be seen how this card performs for Daz. My guess is it will be very close to a 1080, as the 1070 Ti will only have 100 fewer CUDA cores than a 1080 while running at the same base clock. So this card, if it can stay away from the mining inflation, could offer a true boost over a single 1080 Ti in a multi-GPU rig. But it would still have less VRAM than a 1080 Ti, so there is a trade-off. Keep in mind these are rumored specs, but the source has been correct in other cases. It does seem like an odd thing to do, as it would really cannibalize the 1080, though I am sure Nvidia has made its bank on the 1080 in the year it has been out.

    http://www.pcgamer.com/heres-how-the-geforce-gtx-1070-ti-might-stack-up-against-other-pascal-cards/

    It is obvious that yet again Nvidia is reacting to AMD, which is awesome for us consumers.

  • outrider42 Posts: 3,679
    ebergerly said:

    Scott, I'm not a games guy (actually, I've never played a video game), but I've seen enough tech videos to know that "awesome" results might be irrelevant for some. I'm looking at one right now where the lowest frame rates for a game with the GTX 1080 were 44-46 FPS, and for the exact same game with the GTX 1080 Ti were 48-58 FPS (the range is for the 0.1% to 1% averages).

    Now, how many people would consider that "awesome" and want to pay the extra money? Seems to me that would be in the "barely noticeable" category. 

    Now expand that to the same % improvement in a game with minimums in the 80-90 FPS range? Is that high a framerate REALLY noticeable? Heck, movies on a big screen in the theater are in the 30 FPS range. Can you really tell the difference? 

    If you are not a gamer, it is really not wise to make this argument. The 30 fps for movies is for MOVIES. Have you ever noticed the motion blur in a movie? That is because of the low frame rate, which is actually only 24 fps, the Hollywood standard that has been around for about a hundred years. 24 became the standard for the simple reason that it was the lowest frame rate viewers would accept. We have become so accustomed to it that a high-frame-rate movie actually made people feel weird (see the reviews of The Hobbit, shown at 48 fps, and in particular the "soap opera effect"). A movie is a very different thing from a video game; it is a passive experience. Gaming interaction changes things, and the demands are different.

    60 fps in a video game is not just about the visual fidelity of the frames on the screen. It is also about control. A game that is running at 60 fps is more responsive than a 30 fps game; the time each frame spends on screen determines how responsive it is. This is a big deal in any fast-paced game. It is not as big a deal in a slower, turn-based RPG, but for racing games, sports games, and first-person shooters you WANT 60 fps or better, because it also affects your performance in that game. Exceeding 60 is gravy, and can improve the response time even further. Even if the player cannot perceive the difference visually, they will FEEL the difference in how it plays. But make no mistake, many people can see a difference, too.

    To elaborate, trying to aim in a first-person shooter is more challenging at a low frame rate, because the aim will jump around more. Frame drops and stutter make this even worse. With 60 fps or better it is much smoother, and that is very noticeable to any gamer. A racing game at 30 fps will feel like it is in slow motion compared to 60. 45 fps is NOT that good either, because in most cases the screen will still only display 30 fps: most displays only show specific frame rates, like 30, 60, or 90, so when a game runs at 45 the display effectively runs at 30. The displays that can adapt to these in-between frame rates are expensive, so only a serious gamer, who quite likely has an expensive GPU to go with it, will have one. But you want to stay above 60 for this reason.

    It can even go beyond control. Some video games are "smarter" at higher frame rates. In many games, the frame rate also ties into the AI and physics. In a popular game I play, the AI is limited in how many actions it can take per frame, 2 to be exact. Think of this as the AI's thought process: it can only think of 2 things during a frame. If the game runs at 30 fps, the AI effectively thinks only half as much as it does at 60 fps. The game was designed for 60 fps, and this makes the AI much "dumber" and easier to exploit. (Sometimes players might purposely gimp their frame rate to get an advantage, lol.) There are also games where the physics gets out of whack at different frame rates. On a laptop, I have seen some AI just standing around, because they are waiting for their turn to "think." So I shot them, lol. On my desktop the difference is clear, as the AI reacts to my movements more. Conversely, playing at very high frames per second in this game can make the AI quite deadly, but that can also make them more entertaining to engage in battle.

    So all of this stuff matters, and it can be directly affected by frame rates. A movie would never have this problem. But try watching a VR video at 30 FPS... lol, that can make people physically sick. For VR you WANT 60 or even 90 fps. It has to be high and responsive.
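
    To put numbers on the "time each frame spends on screen" point, a one-loop sketch:

        # Frame time is the number that matters for responsiveness.
        for fps in (24, 30, 45, 60, 90, 120):
            print(f"{fps:3d} fps -> {1000 / fps:.1f} ms per frame")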

  • kyoto kid Posts: 41,245

    I do tend towards complex scenes that I literally can't render right now at all, and I have had to go back to portrait types, which still take 2 days to render. I am on an old Envy laptop. In February we are buying a new one, and I want to do as much research ahead of time as possible. My last render was only 1000 x 800, one G3, and a handful of props, and it still took 2 days during which I could not even touch my laptop. I know literally anything will be better, but since this will be the last time in the next 3 years that I can upgrade (everything is planned that far out), I want to be sure to get the best I can afford. I do game some, but the most graphics-intensive ones are things like severely modded Skyrim and Fallout/Borderlands type games. And I'm terrible at layering; I can't draw shadows convincingly to save my life.

    ..I'd go for a desktop over another notebook. You can get a lot more performance for the same price.

  • kyoto kid Posts: 41,245
    edited September 2017
    ebergerly said:

    Scott, I'm not a games guy (actually, I've never played a video game), but I've seen enough tech videos to know that "awesome" results might be irrelevant for some. I'm looking at one right now where the lowest frame rates for a game with the GTX 1080 were 44-46 FPS, and for the exact same game with the GTX 1080 Ti were 48-58 FPS (the range is for the 0.1% to 1% averages).

    Now, how many people would consider that "awesome" and want to pay the extra money? Seems to me that would be in the "barely noticeable" category. 

    Now expand that to the same % improvement in a game with minimums in the 80-90 FPS range? Is that high a framerate REALLY noticeable? Heck, movies on a big screen in the theater are in the 30 FPS range. Can you really tell the difference? 

    If you are not a gamer, it is really not wise to make this argument. The 30 fps for movies is for MOVIES. Have you ever noticed the motion blur in a movie? That is because of the low frame rate, which is actually only 24 fps, the Hollywood standard that has been around for about a hundred years. 24 became the standard for the simple reason that it was the lowest frame rate viewers would accept. We have become so accustomed to it that a high-frame-rate movie actually made people feel weird (see the reviews of The Hobbit, shown at 48 fps, and in particular the "soap opera effect"). A movie is a very different thing from a video game; it is a passive experience. Gaming interaction changes things, and the demands are different.

    60 fps in a video game is not just about the visual fidelity of the frames on the screen. It is also about control. A game that is running at 60 fps is more responsive than a 30 fps game; the time each frame spends on screen determines how responsive it is. This is a big deal in any fast-paced game. It is not as big a deal in a slower, turn-based RPG, but for racing games, sports games, and first-person shooters you WANT 60 fps or better, because it also affects your performance in that game. Exceeding 60 is gravy, and can improve the response time even further. Even if the player cannot perceive the difference visually, they will FEEL the difference in how it plays. But make no mistake, many people can see a difference, too.

    To elaborate, trying to aim in a first-person shooter is more challenging at a low frame rate, because the aim will jump around more. Frame drops and stutter make this even worse. With 60 fps or better it is much smoother, and that is very noticeable to any gamer. A racing game at 30 fps will feel like it is in slow motion compared to 60. 45 fps is NOT that good either, because in most cases the screen will still only display 30 fps: most displays only show specific frame rates, like 30, 60, or 90, so when a game runs at 45 the display effectively runs at 30. The displays that can adapt to these in-between frame rates are expensive, so only a serious gamer, who quite likely has an expensive GPU to go with it, will have one. But you want to stay above 60 for this reason.

    It can even go beyond control. Some video games are "smarter" at higher frame rates. In many games, the frame rate also ties into the AI and physics. In a popular game I play, the AI is limited in how many actions it can take per frame, 2 to be exact. Think of this as the AI's thought process: it can only think of 2 things during a frame. If the game runs at 30 fps, the AI effectively thinks only half as much as it does at 60 fps. The game was designed for 60 fps, and this makes the AI much "dumber" and easier to exploit. (Sometimes players might purposely gimp their frame rate to get an advantage, lol.) There are also games where the physics gets out of whack at different frame rates. On a laptop, I have seen some AI just standing around, because they are waiting for their turn to "think." So I shot them, lol. On my desktop the difference is clear, as the AI reacts to my movements more. Conversely, playing at very high frames per second in this game can make the AI quite deadly, but that can also make them more entertaining to engage in battle.

    So all of this stuff matters, and it can be directly affected by frame rates. A movie would never have this problem. But try watching a VR video at 30 FPS... lol, that can make people physically sick. For VR you WANT 60 or even 90 fps. It has to be high and responsive.

    ...ahh, 60 FPS, which was originally referred to as "Showscan". I saw demos of it at Expo '86. It achieved a very real sense of depth without 3D trickery. Things like helicopter rotors and propellers didn't have the annoying motion blur effect, wheels didn't appear to be rotating backwards, and camera pans were incredibly smooth. The downside was the expense, as it required using 60 percent more film per second, a huge cost when talking 70 mm.

  • swordkensia Posts: 348
    edited September 2017

    Sorry, ignore my previous post.

    S.K.

  • FWIW Posts: 320

    What attached image?

  • ebergerly Posts: 3,255
    edited September 2017

    Consider all of this:

    One 1080 Ti will use less electricity than two 1070s.

    One 1080 Ti has more VRAM than two 1070s.

    Two 1070s will have more total CUDA cores than one 1080 Ti, but you also have to factor in that using two cards is not a full 100% boost. Plus, it takes a bit longer to get a scene loaded onto multiple GPUs.

    And right now, the cost of two 1070s is greater than that of a single 1080 Ti, and they do not offer better performance. IMO the answer is a no-brainer.

    I wish we could re-direct our arguments away from the generic, somewhat irrelevant "more, better, faster" and towards the HOW MUCH more, better and faster, and whether that matters to a real person. 

    Your point boils down to " less + more + more + longer + greater = a no-brainer". I'm sorry, but generalities do not equal a no-brainer. 

    Just one example from your list...

    You say one 1080 Ti uses less electricity than two 1070s. That's probably true. From what I found, at load the 1070 draws something like 150 watts (which is about what I see on my system), and the 1080 Ti around 280 watts. So 2 x 1070 = 300 watts versus 280 watts. The difference is 20 watts. Fine.

    However, even if we wanted to prove your point to the extreme, let's say the difference was a staggering 100 watts. In terms of the COST of that difference in actual $$, which is what's really important, consider that a typical cost of electricity in the US is around 10 cents for every kilowatt-hour. That means if you use 1,000 watts for one hour it costs you 10 cents.

    So using our exaggerated 100 watt difference, it takes 10 hours to draw 1,000 watt-hours. So every hour you run your system the cost difference is 1 cent.  

    So if you ran your two 1070s CONTINUOUSLY, AT FULL LOAD (continuously doing renders or playing video games), 24 hours a day for 1 year (8,760 hours in a year), that would cost an additional $87 over the 1080 Ti. And that's assuming our exaggerated watt difference, plus it assumes we're running continuously for 1 year, which I doubt anyone here does.

    Practically, the cost difference over a year for most people is probably negligible. Maybe a few $$ at most.
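
    Here is that arithmetic as a quick sketch (the 100 W gap and the $0.10/kWh rate are the deliberately exaggerated illustrative figures from above, not measurements):

        # Electricity-cost arithmetic from the post above.
        watt_difference = 100              # exaggerated gap between 2x 1070 and 1x 1080 Ti, in watts
        price_per_kwh = 0.10               # typical US rate used above, in $/kWh

        cost_per_hour = watt_difference / 1000 * price_per_kwh
        hours_per_year = 24 * 365          # 8,760 hours, running continuously
        print(f"Extra cost per hour at full load:  ${cost_per_hour:.2f}")                    # $0.01
        print(f"Extra cost per year, 24/7 at load: ${cost_per_hour * hours_per_year:.2f}")   # $87.60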

    My only point is PLEASE, let's focus less on generalities which may have little real-world relevance, and more on actual numbers.  

     

  • ebergerly Posts: 3,255
    edited September 2017

    If you are not a gamer, it is really not wise to make this argument. 

    Apparently not. I'm kinda surprised there's so much involved in the importance of frame rates for video games. I figured it's just a bunch of guys running around with machine guns shooting people and blowing stuff up. 

    So would you agree that when you get up past 60 FPS, and the two cards you're comparing are both producing more than 60 FPS, and card 1 is called "better" because it gives you 118 FPS compared to the other's 100 FPS: is card 1 really noticeably better, or is the difference irrelevant and not worth spending money on?

  • JamesJAB Posts: 1,760
    ebergerly said:

    Consider all of this:

    One 1080 Ti will use less electricity than two 1070s.

    One 1080 Ti has more VRAM than two 1070s.

    Two 1070s will have more total CUDA cores than one 1080 Ti, but you also have to factor in that using two cards is not a full 100% boost. Plus, it takes a bit longer to get a scene loaded onto multiple GPUs.

    And right now, the cost of two 1070s is greater than that of a single 1080 Ti, and they do not offer better performance. IMO the answer is a no-brainer.

    I wish I could get you guys to re-direct your arguments away from the generic, somewhat irrelevant "more, better, faster" and towards the HOW MUCH more, better and faster, and whether that matters to a real person. 

    Your point boils down to " less + more + more + longer + greater = a no-brainer". I'm sorry, but generalities do not equal a no-brainer. 

    Just one example from your list...

    You say one 1080 Ti uses less electricity than two 1070s. That's probably true. From what I found, at load the 1070 draws something like 150 watts (which is about what I see on my system), and the 1080 Ti around 280 watts. So 2 x 1070 = 300 watts versus 280 watts. The difference is 20 watts. Fine.

    However, even if we wanted to prove your point to the extreme, let's say the difference was a staggering 100 watts. In terms of the COST of that difference in actual $$, which is what's really important, consider that a typical cost of electricity in the US is around 10 cents for every kilowatt-hour. That means if you use 1,000 watts for one hour it costs you 10 cents.

    So using our exaggerated 100 watt difference, it takes 10 hours to draw 1,000 watt-hours. So every hour you run your system the cost difference is 1 cent.  

    So if you ran your two 1070s CONTINUOUSLY, 24 hours a day for 1 year (8,760 hours in a year), that would cost an additional $87 over the 1080 Ti. And that's assuming our exaggerated watt difference, plus it assumes we're running continuously for 1 year, which I doubt anyone here does.

    Practically, the cost difference over a year for most people is probably negligible. Maybe a few $$ at most.

    My only point is PLEASE, let's focus less on generalities which may have little real-world relevance, and more on actual numbers.  

     

    If you are going to harp on people for using generalities, then at least fact-check yourself before stating your figures.

    • According to Nvidia the GTX 1080 Ti is rated to consume 250W, and according to the GTX 1080 Ti Founders Edition (exact reference design card) review at Tom's Hardware it consumes 248.6W in their torture test (this should be the same as a 100% load Iray render). Most definitely not the 280W that you generalized.
    • According to Nvidia the GTX 1070 is rated to consume 150W, and according to the GTX 1070 review at Tom's Hardware the Founders Edition consumes 150W (180W for the MSI Gaming X version) in their test (this would mean 300W usage for two stock clock speed cards rendering in Iray).
    • So running two GTX 1070 cards at stock clock speeds will use 50W more while rendering than a single GTX 1080 Ti.
    • $0.10 per kWh... must be nice.... In Southern California, baseline is $0.16, over baseline is $0.25, and 400% over baseline is $0.31. If you run your AC during the summer you are going past the 400% mark and paying $0.31 per kWh... If you are running Iray renders, then it's a no-brainer that your AC will be pumping to keep your GPU from self-destructing.
    • Another point to keep in mind: that extra 50W of power usage is an extra 50W of heat venting out of your computer, which makes your AC unit run a little longer with each cooling cycle. (A quick recalculation of the cost at these rates is sketched right after this list.)
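
    For anyone who wants to redo that arithmetic with the corrected 50W gap and these rates, a quick sketch (the 8 hours of rendering per day is just an illustrative figure):

        # Annual cost of the extra 50 W at the Southern California tiers above.
        watt_gap = 50                                  # 2x 1070 vs. 1x 1080 Ti, in watts
        rates = {"baseline": 0.16, "over baseline": 0.25, "400% over baseline": 0.31}  # $/kWh

        render_hours_per_year = 8 * 365                # illustrative: 8 hours of rendering a day
        kwh_per_year = watt_gap / 1000 * render_hours_per_year
        for tier, rate in rates.items():
            print(f"{tier}: ~${kwh_per_year * rate:.2f} extra per year")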

     

    One more point to make, the same as lots of other people: the 11 GB of VRAM on the GTX 1080 Ti trumps everything.
    I have a 6 GB GTX 1060, and OMG, as soon as I start playing around with scenes that include volumetric lighting effects, lots of emissive surfaces, or HD morph characters set at high SubD levels for closeups or renders above 1920x1080... I run out of VRAM really quickly and the render job dumps down to my dual Xeon CPUs, consuming 130W each and easily taking 3 to 4 times as long on the render.
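
    For anyone who wants to watch how close a scene is getting to that limit, here is a minimal sketch; it assumes the nvidia-smi utility that ships with the NVIDIA driver is on the PATH:

        # Poll GPU memory so you can see when a scene is about to spill out of
        # VRAM (and the render falls back to the CPU).
        import subprocess

        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.used,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        for line in result.stdout.strip().splitlines():
            print(line)   # e.g. "GeForce GTX 1060 6GB, 5721 MiB, 6144 MiB"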

    ** Done ranting **

  • ebergerly Posts: 3,255
    edited September 2017

    JamesJAB, with all due respect, it seems like you missed the point where I allowed for up to a 100 watt difference. You seem to be saying the actual difference is 50 watts, and I allowed for twice that. And you seem to agree with the 2 x 1070 wattage I used (300 watts), and you're quibbling over my use of 280 watts for the 1080 Ti versus your value of 248.6. That's fine. Use whatever numbers you like. I already allowed for a huge difference in order to help prove his point, and even using those exaggerated differences I still came up with only a few dollars a year of difference. You can triple that result to allow for higher electricity costs in different areas, but 3 times $3 is only $9.

    Clearly, if you or anyone else want to re-do my analysis using different numbers I'd love to see it. If I'm wrong and the actual values are different I'd love to learn what's real. But your response seems to pretty much agree with my analysis. 

    Am I missing something? 

  • Let's try this again.

    I am fortunate to have a three-machine mini render studio.

    Machine 1 runs a Titan Z + Titan X.

    Machine 2 runs a pair of 1070s.

    Machine 3 runs a 1080 Ti + Titan Black.

    In the image sample attached, I used machines 2 and 3 for the comparison test. As you can see, the scene is an interior shot using a lot of reflective surfaces, glass, etc., and five mesh lights as light sources (some of which are off shot).

    On both machines the 1070 and 1080 Ti were set as the primary cards, also running the display; the secondary cards were disabled. Both machines run the same OS (Win7 64-bit), have the same amount of RAM (16 GB), and run comparable motherboard and processor hardware... actually, Machine 2 has a better motherboard and processor than Machine 3.

    The scene was set to render for 2000 samples, and OptiX Prime Acceleration was enabled on both machines. The Architectural Sampler was enabled and Rendering Quality was set to 5.

    Machine 2: 1070 Scene Load Time: 4 minutes, 30 seconds.

    Machine 3: 1080ti Scene Load Time: 3 minutes, 10 seconds.

    Machine 2: 1070 Render time to 2000 samples: 1 hour, 28 minutes, 19 seconds.

    Machine 3: 1080ti Render time to 2000 samples: 45 minutes, 41 seconds.

    This is an example of a processing- and calculation-intensive scene, where better core processing and core count clearly demonstrate a considerable saving in time. Sure, if you are rendering scenes that typically take 20-30 minutes, then shaving off 6-10 minutes or so may not make much difference; however, if you are doing more intensive scenes, the figures clearly speak for themselves.

    Cheers,

    S.K.

    [Attachment: Scene sample render.png, 1920 x 1080, 3M]
  • FWIW Posts: 320

    Not really the point, but I fricking love that picture, S.K.

  • ebergerly Posts: 3,255

    Swordkensia, EXCELLENT!!! Nothing I love more than actual data!!! Thanks very, very much for the input.

    So it seems like you're getting roughly a 50% reduction in render time (45 minutes vs. 90 minutes) with a single 1080 Ti versus a single 1070, if I'm reading your results correctly (after the scene is loaded)? That falls right in line with the many results that have been posted in the benchmark thread.
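
    Working that out from the posted times, a quick sketch:

        # Speedup implied by swordkensia's numbers.
        t_1070 = 1 * 3600 + 28 * 60 + 19        # 1:28:19 on the 1070, in seconds
        t_1080ti = 45 * 60 + 41                 # 0:45:41 on the 1080 Ti, in seconds

        print(f"Speedup: {t_1070 / t_1080ti:.2f}x")                          # ~1.93x
        print(f"Render time cut by {(1 - t_1080ti / t_1070) * 100:.0f}%")    # ~48%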

  • Let's try this again.

    I am fortunate to have a three-machine mini render studio.

    Machine 1 runs a Titan Z + Titan X.

    Machine 2 runs a pair of 1070s.

    Machine 3 runs a 1080 Ti + Titan Black.

    In the image sample attached, I used machines 2 and 3 for the comparison test. As you can see, the scene is an interior shot using a lot of reflective surfaces, glass, etc., and five mesh lights as light sources (some of which are off shot).

    On both machines the 1070 and 1080 Ti were set as the primary cards, also running the display; the secondary cards were disabled. Both machines run the same OS (Win7 64-bit), have the same amount of RAM (16 GB), and run comparable motherboard and processor hardware... actually, Machine 2 has a better motherboard and processor than Machine 3.

    The scene was set to render for 2000 samples, and OptiX Prime Acceleration was enabled on both machines. The Architectural Sampler was enabled and Rendering Quality was set to 5.

    Machine 2: 1070 Scene Load Time: 4 minutes, 30 seconds.

    Machine 3: 1080ti Scene Load Time: 3 minutes, 10 seconds.

    Machine 2: 1070 Render time to 2000 samples: 1 hour, 28 minutes, 19 seconds.

    Machine 3: 1080ti Render time to 2000 samples: 45 minutes, 41 seconds.

    This is an example of a processing- and calculation-intensive scene, where better core processing and core count clearly demonstrate a considerable saving in time. Sure, if you are rendering scenes that typically take 20-30 minutes, then shaving off 6-10 minutes or so may not make much difference; however, if you are doing more intensive scenes, the figures clearly speak for themselves.

    Cheers,

    S.K.

    Thanks for that test, swordkensia!

    But first, are the CPUs the same, or close?  That seems to play a role. 

  • ebergerly said:

    Swordkensia, EXCELLENT!!! Nothing I love more than actual data!!! Thanks very, very much for the input.

    So it seems like you're getting roughly a 50% reduction in render time (45 minutes vs. 90 minutes) with a single 1080 Ti versus a single 1070, if I'm reading your results correctly (after the scene is loaded)? That falls right in line with the many results that have been posted in the benchmark thread.

    That is correct, sir. :)

    As an aside, for comparison: the 1070 is only 5% slower than my Titan X (900 series), and the 1080 Ti is within 10% of the Titan Z (which is an utter beast).

    In my humble opinion, before the 1070s became inflated in price due to the mining craze, it was without doubt the best card for price versus performance, and it may still be.

    @Winterflame I thank you.

    Cheers,

    S.K.

  • ebergerly Posts: 3,255

    Perhaps that will put to rest the claims that the benchmark scene used for Iray renders doesn't accurately reflect the render times for larger scenes. Once it's loaded into the GPU, render time is render time regardless of scene size. At least that's what I *think* is the case.

  • ebergerly said:

    Swordkensia, EXCELLENT!!! Nothing I love more than actual data!!! Thanks very, very much for the input.

    So it seems like you're getting roughly a 50% reduction in render time (45 minutes vs. 90 minutes) with a single 1080 Ti versus a single 1070, if I'm reading your results correctly (after the scene is loaded)? That falls right in line with the many results that have been posted in the benchmark thread.

    That is correct, sir. :)

    As an aside, for comparison: the 1070 is only 5% slower than my Titan X (900 series), and the 1080 Ti is within 10% of the Titan Z (which is an utter beast).

    In my humble opinion, before the 1070s became inflated in price due to the mining craze, it was without doubt the best card for price versus performance, and it may still be.

    @Winterflame I thank you.

    Cheers,

    S.K.

    Good to know - that confirms what I've suspected, but only suspected, since we here on the forums didn't have the information to confirm it.

    The 1080 Ti has almost twice the memory throughput of the 1070, so as long as it's not being bottlenecked by the PCIe connection, it will load a scene onto the card faster.
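
    Putting numbers on "almost twice" with the published memory bandwidth specs:

        # Published memory bandwidth, GTX 1070 vs. GTX 1080 Ti.
        bw_1070 = 256        # GB/s
        bw_1080ti = 484      # GB/s
        print(f"1080 Ti / 1070 bandwidth ratio: {bw_1080ti / bw_1070:.2f}x")   # ~1.89x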

  • swordkensia Posts: 348
    edited September 2017

    The CPU in Machine 2 is an Intel Core i7-3930K, 6 cores (12 threads), running at 4.0 GHz (overclocked).

    Machine 3 is running an Intel Core i7-4770K, 4 cores (8 threads), running at 4.5 GHz (overclocked).

    But to be honest, once the scene is in the GPU most of the heavy lifting is being done there; the CPUs are only being used to write back the calculations. For example, Task Manager on Machine 2 displayed CPU usage of 16% whilst rendering, with only six of the twelve threads in use, and of those six only two were actually working at around 50% utilisation, so the processor impact is negligible.
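
    A minimal sketch for anyone who wants to log the same thing themselves; it assumes the third-party psutil package (pip install psutil):

        # Sample per-core CPU usage for ~10 seconds while a render is running,
        # to see how little the CPU does once the scene is on the GPU.
        import psutil

        for _ in range(10):
            per_core = psutil.cpu_percent(interval=1.0, percpu=True)
            print("per-core %:", per_core)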

    Cheers,

    S.K.

  • ebergerly Posts: 3,255

    I vote swordkensia gets the DAZ Forums award for most useful post in September 2017. And if there isn't an award, we should make one.

    And the winner is...........
