Comments
Ah ok. Yeah that is a bit strange.
Either way, it would be great to get more tests on this so we can see if this is a real trend. There are a lot of things at play here. Is it really just as simple as more CPU cores? We have a test with a 1080+1070 with a Ryzen 1700 that also gets beat by this 1800X dual-1070 time. Not by much, but that still should not be the case. And it beats two 1080s. I noticed the dual 1080s are in a laptop; are those downclocked from desktop variants? That may explain the difference as well.
To avoid confusion, everyone needs to verify they are doing tests the same way when they post their results. Like the Mythbusters used to say, we need consistent results. (I miss that show.)
And again, here is the link to this scene. You may need to right click the download button to download it, depending on how moody the Daz website is at the time. <.<
https://www.daz3d.com/gallery/#images/526361/
Also, @bluejaunte, your characters are probably the best models in the store, IMO. I have a few of them now. Whatever you are doing is working extremely well. You have quickly become a PA I watch new releases for.
Oh, yes, that may very well be the case. Laptops often have mobile variants that are quite a bit slower and less power hungry. Although two 1080s in a laptop? What kind of laptop is that, even? Never heard of such a thing.
Oh, and thanks for the compliments! :)
I've seen a few dual-1080 laptops; they are beasts! I know the CUDA cores are not cut down. Pascal is not cut down for laptops, hence they no longer have the "M" moniker for mobility versions. A laptop 1080 is actually the full 1080 chip. But I do not know what the clock speeds are, and they may be clocked lower. The i7 in that laptop is cut down: it is a 4-core chip instead of a 6-core one, and it is clocked at only 2.7 GHz.
outrider42, I have filled in the items above, since it seems to be the first benchmark comparison between a 1080 Ti and dual 1070s. I just ran the tests in reverse order and got basically the same results, within a few seconds on all three. So I stand by it: at least on my build, two GTX 1070s are faster than one 1080 Ti overclocked and water cooled. The 1070s can run faster (overclocked), but in my box they are sandwiched so close to each other (probably about 1/4" apart). I used to run them at about +125 MHz on the GPU and +375 MHz on the memory.
I believe you, and the fact that you can repeat the results is even better. What are the clocks of the cards at stock? Many cards come factory overclocked, especially 3rd party gaming cards. I think it would be a good idea to show clocks on these cards. You gained something like 40 seconds in one test on the 1080 Ti with just 100 MHz. You OC'd the memory too, but I don't believe that was a big factor. Though you can certainly test that theory if you wish, since I was wrong about PCIe lanes.
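As a back-of-the-envelope check (the stock boost clock and baseline time below are my assumptions, not numbers from this thread), a simple "render time scales inversely with core clock" model predicts a much smaller gain than 40 seconds:

```python
# Back-of-the-envelope: render time modeled as inversely proportional to core clock.
# Assumed numbers (NOT from this thread): ~1900 MHz stock boost on a 1080 Ti,
# ~5 minute baseline render. Iray is not purely core-bound, so this is generous.
base_clock_mhz = 1900
oc_clock_mhz = base_clock_mhz + 100
base_time_s = 5 * 60

oc_time_s = base_time_s * base_clock_mhz / oc_clock_mhz
print(f"Predicted gain: {base_time_s - oc_time_s:.1f} s")  # ~15 s, well short of 40 s
```

So a +100 MHz core bump alone probably isn't the whole 40 seconds, which is exactly why I'd like to see stock clocks listed alongside results.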
The question we have is more about the other rigs you beat. What are the clocks on the dual 1080s in that laptop? I tried looking it up, and the one and only review I found that listed any clock speeds stated they were base 1557 MHz. That model was slightly different, but if that is correct, that is ever so slightly down from the 1607 MHz Nvidia lists as the base of a Founders Edition 1080. That's only 50 MHz, though! That doesn't seem like enough to explain how they were beat by two 1070s. It is also possible that @tj_1ca9500b did not start the test with Iray preview active. That could very well explain it.
@sone said they have a 1070 and 1080. They say OC by Gigabyte, so I assume both are OC'd a bit. They might even have the same 1070 you do. I found 1721 MHz for a 1080 in OC mode (not counting boost). They have a Ryzen 1700. Their fastest test was 4:16, and they said the VRAM was loaded, so I guess that means the Iray preview was on. They were just a half second faster than your dual 1070s, even though they have an OC'd 1080. So this test is even more interesting than the laptop test. What is the difference between a Ryzen 1700 and 1800X besides clock speed? Is the 1800X the key to why the dual 1070s are running almost exactly as fast, within the margin of error? This is very curious, and I'd love to know what is making your 1070s sing so well, or if there is something holding the others back a bit.
I'm hoping that someone runs these benchmarks on a dual EPYC system at some point. Sure, I can guesstimate using the Threadripper results, but it's not the same as an actual benchmark...
May be a long wait given the price of those
These are my results for stock runs, overclocking just the memory, overclocking just the GPU, and overclocking both, for JUST the GTX 1080 Ti. It seems that overclocking the memory does more for render times than the GPU core speed. FYI: I cannot seem to get over +100 MHz to be stable, even with increased core voltage, power limits, etc.
Titan X Pascal @ 2037 MHz + Titan Xp @ 2062 MHz, stock memory speeds, custom water cooled, 39°C each. GTX 970 used for display only.
Intel 2600K (4 cores) @ 4 GHz, 32GB DDR3
2 min 34.71s (outrider42's new scene)
Iray Starter Scene - 5.29 with CPU / GPU - i7 3770k / Titan (the first one) all on stock
Iray Bench 2018 - 13.42 with same.
Actually not as bad as I would've feared
Alienware 15 R3 (i7 7820HK @ ~4 GHz, OC1 profile, GTX 1080 with Max-Q, 32GB DDR4-2400, PM981 1TB) [HWiNFO attached]
Iray Starter scene: (Genesis 8)
(CPU: OFF) GPU & Optix: 8m 20s
It is fascinating to see the difference a few years makes. The OG Titan turned 5 in February. Thanks for posting.
I upgraded my graphics card yesterday from a 980ti to a 1080ti. Whilst I was waiting for the delivery I did some bench testing with the original scene, and then did some more today using the 2018 scene. In a few weeks' time I plan to put the 980ti back into the PC, so I will post back results with dual GPU then.
System specs:
i7 5960X with 32GB RAM.
980ti clock core 1000 MHz
1080ti clock core 1595 MHz
Daz Studio version 4.9.4.122
Using the original benchmark scene:
No iray preview. OptiX Acceleration disabled. Fresh load of the scene before hitting Render.
980ti 5 minutes 44 seconds
980ti + i7 5960X 4 minutes 59 seconds
1080ti 3 minutes 26 seconds
1080ti + i7 5960X 3 minutes 22 seconds
No iray preview. OptiX Acceleration enabled. Fresh load of the scene before hitting Render.
1080ti + i7 5960X 2 minutes 8 seconds
From iray preview window. OptiX Acceleration enabled.
1080ti + i7 5960X 1 minute 52 seconds
1080ti 1 minute 52 seconds
Using the 2018 Render scene provided by Outrider 42:
Iray preview enabled. OptiX enabled. Optimisation for Speed.
1080ti 5 minutes 17.76 seconds
1080ti + i7 5960X 5 minutes 22.76 seconds
i7 5960X only 1 hour 4 minutes 19.30 seconds
System specs:
i7 8700K @ 4.4 GHz x 6
24GB RAM @ 2666 MHz
2 x GTX 1070 (MSI Armor)
Iray Bench 2018: (Genesis 8)
Test1: 2 x 1070 / OptiX enabled / Optimization: Speed / No OC 4:33
Test2: 2 x 1070 / OptiX enabled / Optimization: Speed / No OC 4:23 (Scene loaded in RAM)
Test3: 2 x 1070 + CPU / OptiX enabled / Optimization: Speed / No OC 4:18
Iray benchmark 2018 by outrider42
5 minutes 3.86 seconds
Dual GTX 970
Xeon 2696
IRAY Starter scene... (RAW)
2018 Benchmark test results... (RAW)
Windows 10 (64-bit) {WDDM v2.3}
1x i9 7980xe (CPU), 18 cores, 36 threads (Daz only uses 33 threads to render)
64GB DDR4 (RAM) Corsair Dominator Platinum DDR4 3000 (PC4 24000)
1x Titan-X (Maxwell) {Performance mode set to "Compute Power"}
2x Titan-Xp (Collectors edition) {Performance mode set to "Compute Power"}
2x Titan-V (Volta) [Not available for rendering in this version of Daz]
2TB M.2 NVMe (Samsung 960 PRO) {Boot and Daz}
4TB SATA (Samsung 860 PRO) {Daz Library}
FUNNY NOTE: I hit 78% convergence/done, after 4 seconds of rendering...
I would change a few things with the benchmark...
1: Turn off "Sun and Sky" rendering and use "Scene only". You are inside a box; processing the outside world is wasted work that still counts toward convergence.
2: Turn off "Render ground". You are in a box with a ground, so that is wasted processing. Plus, the "ground" is rendering outside the box. See #3.
3: The "box" you are using, is a primitive with two sides. Inside and outside are both being rendered. Due to sun and sky being outside and a ground which the box casts unseen shadows on. And the boxes "gloss" reflection settings.
4: You are rendering the box material with a "Metallicity" profile without using any of the metallicity settings. That slows it down while rendering nothing (values of 0 are still processed once you select Metallicity as the shader type; it is the default "catch-all" shader).
5: You do not say what settings to use for the "Texture threshold". Defaults are 512 and 1024. Mine are commonly set to 2048-5000 for rendering, so it doesn't compress the textures.
6: Bloom is wasteful post-processing attempted inline while rendering. NVIDIA is horrible at processing those "novelty" filters.
7: You have a group of "Photometric fill lights" which is empty, left over from ???
8: You have Smooth ON for a cube with all 90-degree angles (set to 89.0), and Round Corners ON across materials with no value. Both of those are wasted processing with virtually zero values, as opposed to being skipped when you select OFF.
9: The fingernails, for some reason, were 50% transparent and thus glowing, rendering the inside of the Gen8 model.
10: Eye moisture has all sorts of reflection and gloss settings but is 100% transparent (not rendered in the scene, but still being processed).
11: Lips have 0 for "glossy" but have all the settings and an image map for gloss. They need SOME moisture...
P.S. We need to stop using that model... She has herpes... some kind of redness under her nose, on her lips, at the crease. There is also a horrible "ring around her eyes" in the Gen8 model. Not sure if all Gen8 models have that, or just her. Like someone cut around her eyes with a razor blade and the wound has not healed. The hands look awesome, though, if you fix the nails so they are not 50% transparent, letting you see the inside of her boneless body, which glows due to SSS.
I have a similar setup, but my render times are much longer. Running Daz on Mac Pro.
CPU only - 2x Xeon X5680 @ 3.33 GHz (24GB REG ECC DDR3)
Total Rendering Time: 31 minutes 12.67 seconds
Just got a Titan Black on my PC. Big improvement over my 12-core (2x) Xeon X5680 @ 3.33 GHz.
Starter Scene.
GPU Only + OptiX Prime
Total Rendering Time: 4 minutes 26.67 seconds
I'm not sure if this was posted, but I came across an Iray benchmark that includes not only the Titan V but also the DGX-1. Rather than timing a scene, the cards are rated in megapaths per second. This gives you a good indicator of relative performance, and if true, the Titan V is a real beast for Iray, absolutely destroying everything that preceded it. (At least before the gold-plated Quadros were released last month.) So that $3K does get you something... if Daz updates to support it. What I really like is that they retested older GPUs for this test; they did not just rely on results from older versions of Iray. I am going to make a big list of these here, with a quick ratio check after the list.
This test is using the Iray 2017.1.2 SDK, and all GPUs shown were tested on this same version.
TITAN V 12.07
Tesla V100 11.1
Quadro GP100 7.89
TITAN X (Pascal) 5.4
GeForce GTX 1080 Ti 5.3
Quadro P6000 5.26
GeForce GTX 1070 Ti 4
Quadro M6000 4
GeForce GTX 1080 3.7
TITAN X 3.7
GeForce GTX 780 Ti 3.3
Quadro P4000 3.09
TITAN Black 2.9
GeForce GTX 980 2.9
GeForce GTX TITAN 2.8
Quadro M5000 2.7
Tesla K10 2.1
Tesla K20 1.9
Quadro M4000 1.8
GeForce GTX 750 Ti 1.1
So how much does $60K get you?
DGX-1 58.91
Quadro VCA 30.62
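Since these are throughput figures, dividing any two entries gives the expected speed ratio between cards; here is that quick check (plain arithmetic on the numbers above, nothing new):

```python
# Expected speed ratios from the megapaths/sec figures listed above.
mpps = {
    "TITAN V": 12.07,
    "GTX 1080 Ti": 5.3,
    "GTX 1070 Ti": 4.0,
    "GTX 1080": 3.7,
    "DGX-1": 58.91,
}
print(f"Titan V vs 1080 Ti: {mpps['TITAN V'] / mpps['GTX 1080 Ti']:.2f}x")  # ~2.28x
print(f"1070 Ti vs 1080:    {mpps['GTX 1070 Ti'] / mpps['GTX 1080']:.2f}x") # ~1.08x
print(f"DGX-1 vs Titan V:   {mpps['DGX-1'] / mpps['TITAN V']:.2f}x")        # ~4.88x
```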
Another interesting result is how the 1070 Ti beats the 1080 outright. At first this may seem wrong, but that is not the case. My thinking is that the 1070 Ti has a more updated version of CUDA for Iray, since it released so much later than the 1080 did. That is not unprecedented; as I posted earlier, there are several cases where a newer card has a newer CUDA version even though it is part of the same series. In any case, this really does make the 1070 Ti a tremendous value. And in the one bench we have seen on SickleYield's scene with a 1070 Ti, it beat a 1080. So this may well be true!
Here is the source. They also list a number of cloud servers too, so if you wish to see those, click the link.
https://www.migenius.com/products/nvidia-iray/iray-2017-1-benchmarks
Here is mine (with the original SickleYield scene): 2 minutes 59 sec.
GPU+CPU+Optix on.
CPU: i7 7700K
GPU: MSI GTX 980 ti 6 GB Armor 2X with default clock
RAM: 16GB DDR4 2400 MHz
In that test all 4 Titans will be running @ 16x each, as there are a total of 80 lanes across those 2 Xeons. If you ran that on a single 40-lane CPU with 4 cards, render time would increase, as data throughput to the cards would be halved. I have tested this theory, and it is the case: I had a 16-lane CPU running 2 980 Tis @ 8x each, and render time was slower than 1 980 Ti running alone @ the full 16x.
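For anyone curious, here is the bandwidth side of that lane math as a rough sketch (the per-lane throughput and scene size are assumed round numbers, not measurements):

```python
# PCIe 3.0 carries roughly 1 GB/s per lane (assumed ~0.985 GB/s usable).
# Slots are wired in powers of two, so a 40-lane CPU with four cards typically
# runs x16/x8/x8/x8, while dual Xeons (80 lanes) can feed all four at x16.
PCIE3_GBPS_PER_LANE = 0.985

def transfer_time_s(scene_gb, lanes):
    """Seconds to push a scene of scene_gb to one card over the given link width."""
    return scene_gb / (lanes * PCIE3_GBPS_PER_LANE)

# Hypothetical 4 GB scene:
print(f"x16: {transfer_time_s(4, 16):.2f} s")  # ~0.25 s
print(f"x8 : {transfer_time_s(4, 8):.2f} s")   # ~0.51 s
```

A one-time scene load only differs by fractions of a second between x8 and x16, so if x8 really does slow the whole render, there must be traffic over the bus during rendering too, not just at load time.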
I posted some results above several weeks ago when I upgraded from a 980ti to a 1080ti. This week I finally got enough free time to fit a bigger power supply and put the 980ti back in the machine.
Using the 2018 Render scene provided by Outrider 42:
Iray preview enabled. OptiX enabled. Optimisation for Speed.
1080ti 5 minutes 17.76 seconds
1080ti+980ti 3 minutes 12.13 seconds
Overall, I'm pleased with the difference on the benchtest, but I need to try some larger scenes to see the difference.
A pretty solid 40% cut in render time. That should be noticeable on just about everything, though of course you only see that extra boost when the scene fits in the 980 Ti's 6GB memory, minus whatever Windows takes from it (so maybe ~5GB).
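For the record, the 40% figure is just arithmetic on the two times above:

```python
# 1080 Ti alone vs 1080 Ti + 980 Ti, from the times posted above.
t_solo = 5 * 60 + 17.76   # 317.76 s
t_pair = 3 * 60 + 12.13   # 192.13 s

print(f"Render time cut: {(1 - t_pair / t_solo) * 100:.1f}%")  # ~39.5%
print(f"Overall speedup: {t_solo / t_pair:.2f}x")              # ~1.65x
```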
What is your new CPU, and what is the old one? If you upgraded to a CPU with more lanes, then odds are your new CPU also has more and faster cores than your old one, which would better explain the difference. In order to properly understand what is happening, we need more information; just telling us you have more lanes is not a definitive result. It is also possible your previous system bottlenecked your GPUs in other ways. That's why we need people posting to give us more information on system specs.
There is an easy way to test this. Cover half of the GPU pins with an insulating material like the picture below and run the test again. If there is a difference in speed, then you are on to something.
Anybody who has such a system can test this and see if it makes any difference. I'd be interested in seeing this.
My results surprised me a bit, as the CPU + GPUs render time is almost the same as the GPUs-only render time (there's a quick load-vs-render breakdown after my results below).
CPU: i7-6950X (40 lanes), overclocked 36%, independent cooling loop
32GB RAM
4 x EVGA Hydro Copper GTX 1080, overclocked to 2000 MHz, 16/8/8/8 lanes, independent cooling loop
Starter Scene:
GPUs + OptiX = 2018-05-30 14:41:52.865 Total Rendering Time: 58.68 seconds (load and render), 45.43 seconds render of loaded scene, 05000 iterations
GPUs + CPU + OptiX = 2018-05-30 14:59:03.121 Total Rendering Time: 57.49 seconds (load and render), 45.34 seconds render of loaded scene, 05000 iterations
This is a good comparison test (CPU or no CPU), but I had some background processes running that slowed the benchmark slightly. I left the system configured as normal for my daily operations.
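Breaking those two timings into load and render shows why the CPU barely matters here (numbers straight from the log lines above):

```python
# Load vs render split from the two timings above: (total s, render-only s).
runs = {
    "GPUs + OptiX":       (58.68, 45.43),
    "GPUs + CPU + OptiX": (57.49, 45.34),
}
for name, (total, render) in runs.items():
    print(f"{name}: load/setup {total - render:.2f} s, render {render:.2f} s")
# Load/setup is ~12-13 s and the render ~45 s either way, so once four 1080s
# are on the job the CPU adds essentially nothing.
```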
MSI GE63-Raider-RGB-8R notebook (i7 8750H, GTX 1070, Performance profile setting)
Original Scene:
GPU + OPTIX = Total Rendering Time: 4 minutes 3.33 seconds
GPU + CPU + OPTIX = Total Rendering Time: 3 minutes 7.23 seconds
Outrider42 Scene:
GPU + OPTIX = Total Rendering Time: 9 minutes 11.83 seconds
GPU + CPU + OPTIX = Total Rendering Time: 8 minutes 15.67 seconds
CPU: Ryzen 2700X (PB2 & XFR2 only)
RAM: 16GB (8GB x 2) @ 3400 MHz
VGA: EVGA 1070 Ti 8GB
IRAY Starter scene:
CPU ONLY = 17 minutes 39 seconds
CPU + GPU = 2 minutes 52 seconds
GPU ONLY = 2 minutes 52 seconds
1080 Ti (x3) + Titan Xp
Old scene:
New scene:
GPU clocks start to throttle near the end, hitting a thermal ceiling with air cooling.
1080 Ti
1800x Ryzen
32GB RAM
Only GPU: 3.29
GPU and CPU: 3.01
GPU Optix: 2.09
GPU and CPU Optix: 1.48
Beta 4.11:
GPU Optix: 1.51
GPU Optix + Post Denoiser 8: 1.51
So 4.11 is a bit faster than 4.10 for me.
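Reading those times as minutes:seconds (my assumption; the format isn't stated), the 4.10 → 4.11 difference on the GPU + Optix run works out to about 14%:

```python
# GPU + Optix times, read as minutes:seconds (format assumed, not stated).
t_410_s = 2 * 60 + 9    # 4.10: "2.09" -> 129 s
t_411_s = 1 * 60 + 51   # 4.11: "1.51" -> 111 s

print(f"Render time cut: {(1 - t_411_s / t_410_s) * 100:.1f}%")  # ~14.0%
```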
Paperspace virtual machine, GPU+ P4000 configuration, GPU only:
Optix Off: 6 minutes 18.64 seconds, 4951 iterations
Optix On: 3 minutes 58.62 seconds, 4986 iterations