Comments
Hmmm, I think you might need to recheck part of your chart...
Maybe one of your fields got swapped on the spreadsheet?
You have 1x GTX 1080 Ti + 1x GTX 1070 listed as rendering faster than 2x GTX 1080 Ti...
And you should probably verify that those Titan X numbers are not in fact Titan Xp numbers.
The Titan X and the GTX 1080 Ti have the same number of CUDA cores (256 fewer than the Titan Xp). Also, the Titan X has a slower clock speed and slower VRAM than the GTX 1080 Ti.
I would love to see your LuxMark benchmark. I bet it would really rock with the TR + GPUs.
And a FurryBall benchmark.
Here is mine: 4 minutes 26 seconds.
GPU + CPU + OptiX on.
CPU: i7-7700K
GPU: Inno3D GTX 1060 6 GB (Hynix) at default clock
RAM: 16 GB DDR4 2400 MHz
Hello, some people were interested in testing the GTX 1070 Ti, so here we go:
CPU: i7-4700, 32 GB RAM, Win 8.1
GPU GTX 1070 Ti: 2:30
Interesting! In case anyone is curious to compare, here are my results:
System: Intel i7-4790, 3.6 GHz, 32 GB system RAM; OS: Win7 Home Premium; Graphics: GTX 1080 Ti, GTX 1080.
Render time with both cards: 1:27
GTX 1080 only: 3:01
GTX 1080 Ti only: 2:13
Before each test I closed DAZ Studio to (presumably) clear the memory and provide a clean slate. All renders ran in Iray to 100% with OptiX acceleration enabled.
[ edit/afterthoughts-'r'-us dept.: I started each render in Texture Preview mode. Starting in Iray Preview instead gave a time of 1:16 with both cards running. CPU rendering was disabled in all cases. ]
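To sanity-check those numbers, here is a minimal Python sketch (my own back-of-the-envelope, nothing official) comparing the measured dual-card time against the "ideal" you would get if both cards shared the work perfectly:

```python
# Quick scaling check using the times posted above (m:ss).
def to_seconds(mmss: str) -> int:
    """Convert an 'm:ss' string to seconds."""
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

solo_1080 = to_seconds("3:01")  # GTX 1080 alone
solo_ti = to_seconds("2:13")    # GTX 1080 Ti alone
both = to_seconds("1:27")       # both cards together

# If the cards shared the work perfectly, the combined rate would be
# the sum of the two solo rates (renders per second).
ideal = 1.0 / (1.0 / solo_1080 + 1.0 / solo_ti)

print(f"ideal combined: {ideal:.1f}s, measured: {both}s")
print(f"scaling efficiency: {ideal / both:.0%}")
# ideal combined: 76.7s, measured: 87s -> about 88% of perfect scaling
```

87 seconds against an ideal of roughly 77 is about 88% of perfect scaling, which seems quite healthy for two cards in one box.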
Just letting you know, there are MANY, MANY, MANY "Titan X" styles of cards...
2015 GTX Titan X (the original; Maxwell, sold under the GeForce brand.)
2016 Titan X (Maxwell; has no "GTX" in the name. The one above is also Maxwell, but slightly slower.)
2016 Titan X (Pascal; has no "GTX" and no actual "P" on the card. Failed... marketing confusion.)
Titan Xp (the second attempt to denote "Pascal"; it runs faster than the "Titan X (Pascal)".) {NOTE: They added the "p" as a lowercase letter because a famous YouTuber had branded the earlier card "Titan XP", and to avoid trademark trouble with "Windows XP". Totally ignoring the other confusion with "XP": it will not run on Windows XP. Ironic!}
Titan Xp Founders Edition (yes, more confusion... another "Pascal" card, but faster than the rest, and not just from overclocking.)
The first three get confused as just being called "Titan X".
The last two get confused as just being called "Titan Xp".
None of them are the same card.
Oh, and there is the original "Titan", which some people kept calling "Titan X" when the "X" came out (not here, but in other forums). Those numbers would be horrible for rendering.
Latest benchmarks... Still waiting for Titan V support... Keep in mind, the new tests use the NEW resolution code, not the old code, and Pascal/OptiX support is in now.
OS: Windows 10 (64-bit) {virgin setup; only Daz Studio and the Starter Essentials installed}
RAM: 64 GB DDR4-2666
CPU: Core i9-7980XE at stock speeds (18 cores/36 threads)
Drives: Samsung 960 Pro, M.2, 2 TB
Cards used: "GTX Titan X" (Maxwell), "Titan Xp Collector's Edition (Jedi Order)", "Titan Xp Collector's Edition (Galactic Empire)"
Test setup: CPU + OptiX + all cards (non-SLI), default scene with the Iray viewport running. [41.72 seconds] ~845 W
Same setup, with the "Ground" turned off. (It is unseen, wasted processing in the demo.) [40.68 seconds] ~840 W
Test setup: default, with ground, no CPU. [44.70 seconds] ~710 W
Same setup, with the "Ground" turned off. [43.87 seconds] ~708 W
Same setup, no "ground", CPU only. [8 minutes 12.72 seconds] ~298 W
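For anyone curious about efficiency rather than raw speed, a quick Python sketch of energy per render (the watt figures are my rough wall readings above, so treat the results as ballpark only):

```python
# Energy per render (watts x seconds), from the rough wall readings above.
runs = {
    "CPU + all cards": (845, 41.72),
    "cards only":      (710, 44.70),
    "CPU only":        (298, 8 * 60 + 12.72),
}

for name, (watts, seconds) in runs.items():
    wh = watts * seconds / 3600  # joules -> watt-hours
    print(f"{name:>16}: {wh:5.1f} Wh per render")
# CPU + all cards: ~9.8 Wh; cards only: ~8.8 Wh; CPU only: ~40.8 Wh
```

So cards-only is not just nearly as fast, it is also the cheapest per render; CPU-only burns roughly 4-5x the energy for a render that takes about 11x as long.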
NOTE: Still waiting for NVIDIA and Microsoft to fix the memory allocation/reservation issue in Windows 10. Every card reports "12 GiB total, ~8.1 GiB available", not just the display card. They seem to think the reservation is a "suggestion", but loading crashes at about 8.2 GiB on all cards, so it is really a hard allocation that never resizes; a true suggestion would.
IRAY rend info : CUDA device 1 (TITAN Xp COLLECTORS EDITION): compute capability 6.1, 12 GiB total, 8.10227 GiB available, display attached
IRAY rend info : CUDA device 0 (TITAN Xp COLLECTORS EDITION): compute capability 6.1, 12 GiB total, 8.14166 GiB available
IRAY rend info : CUDA device 2 (GeForce GTX TITAN X): compute capability 5.2, 12 GiB total, 9.08547 GiB available
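If you want to pull those numbers out of the log automatically, a small sketch along these lines works (Python; the regex is mine and assumes the log format quoted above):

```python
import re

# Pull device name and total/available VRAM out of Iray log lines
# like the ones quoted above.
LOG_RE = re.compile(
    r"CUDA device (\d+) \((.+?)\): .*?"
    r"([\d.]+) GiB total, ([\d.]+) GiB available"
)

log = """\
IRAY rend info : CUDA device 1 (TITAN Xp COLLECTORS EDITION): compute capability 6.1, 12 GiB total, 8.10227 GiB available, display attached
IRAY rend info : CUDA device 0 (TITAN Xp COLLECTORS EDITION): compute capability 6.1, 12 GiB total, 8.14166 GiB available
IRAY rend info : CUDA device 2 (GeForce GTX TITAN X): compute capability 5.2, 12 GiB total, 9.08547 GiB available
"""

for m in LOG_RE.finditer(log):
    dev, name, total, avail = m.groups()
    reserved = float(total) - float(avail)
    print(f"device {dev} ({name}): {reserved:.2f} GiB reserved of {total} GiB")
```

Running that over the lines above shows roughly 3.9 GiB held back on each Titan Xp, which is exactly the reservation problem I am complaining about.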
Can you give me the time for a single Titan Xp? I just got my Jedi Order edition and got 2 minutes 7.58 seconds, but I feel that's too long.
Awesome. Does that mean you're volunteering? :)
I wouldn't mind doing it. It just has to be limited to items that everybody gets with Daz. I'd probably steal a few of SickleYield's material balls (if she doesn't mind), since those shaders haven't changed. But the bigger question is whether anybody would actually use the test I create. It would be pointless to create it if nobody uses it.
Speaking of which: what clothing items are included with Genesis 8 Essentials? And which clothes were included with the dForce essentials (or were there any? I forget). I propose two new tests: one for Iray, and a second for dForce. With dForce being so new, we have no idea how it scales with hardware, so I believe a dForce test would be fantastic. Of course, I could also apply dForce to items that Genesis 8 came with, or we could go super simple and just use primitives for dForce testing. It's up in the air, but I really want a dForce benchmark.
My goal would be a test that takes a Titan about 6-10 minutes, so not totally crazy. The longer render times should provide a better spread in the averages and allow us to gauge cards more accurately. The test could be run by anyone, but it is really intended for more powerful machines, GTX 1060 and better. Everyone could still run the old test.
This is what I came up with. There isn't anything but a basic top and underpants in the G8 starters, so I fit the old dancer outfit to her. I went through and tried to give the outfit more Iray settings than it had, like top-coat settings, and did the same for her hair. There are no new textures. I added some dual-lobe settings to the skin, among other changes. The render is 800 by 1000, but it could be made smaller. What is really demanding in this scene is the darkness. That is exactly where most of the render time goes: Iray tries to bounce what little light it can get around the scene as many times as it can, which drives up render times. The render shoots to 90% very fast, then 95%, but the last 5% takes the vast majority of the total time. That is not uncommon, but I believe it happens more in darker images for the reason I described. If you add light to this scene, it will render drastically faster.
3847 iterations, 0.729s init, 1403.289s render. This is on a GTX 970, so a Titan could probably break 10 minutes, or come close.
That's over 23 minutes! But it hit 95% by 8 minutes, so the last 5% alone took 15 minutes. If the lighting were brighter, my guess is it would take about 9 minutes.
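Since Iray reports iteration counts, one way to compare runs that stop at different convergence points is iterations per second rather than wall time. A trivial sketch using the numbers above:

```python
# Rate comparison: iterations per second is independent of where the
# render stops, unlike total wall time.
iterations = 3847
render_seconds = 1403.289  # the GTX 970 run above

print(f"{iterations / render_seconds:.2f} iterations/s")  # ~2.74 it/s
```

If someone posts the iteration count and render time from a Titan on the same scene, you can divide the two rates directly to get a clean relative-performance number.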
What are your full specs, GPU driver, and Iray settings? Did you use the Iray preview in the viewport or not? That makes a difference: most people are using it, and it should cut times because the scene is already loaded. Did you restart the computer? Sometimes a machine performs differently across restarts (crazy, I know, but people have posted this). Sometimes drivers hurt render speeds, Game Ready drivers in particular; in fact, my money would be on a bad Nvidia Game Ready driver. The Game Ready drivers sometimes mess up Iray... badly. It is best to skip them and only install the main Nvidia updates.
Your first render of the day will probably be your fastest. After your system heats up, throttling may start to affect performance. Try again when your computer is cold.
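If you want to confirm throttling rather than guess, you can log temperature, SM clock, and power draw with nvidia-smi while a render runs. A minimal sketch (assumes nvidia-smi is on your PATH; it loops until you press Ctrl+C):

```python
import subprocess

# Log temperature, SM clock, and power draw every 5 seconds while a
# render runs; a falling SM clock as the temperature climbs means the
# card is throttling. Runs until interrupted with Ctrl+C.
subprocess.run([
    "nvidia-smi",
    "--query-gpu=timestamp,temperature.gpu,clocks.sm,power.draw",
    "--format=csv",
    "-l", "5",
])
```

Run it next to your benchmark and compare the first render of the day against a later one; if the clock column drops as the temperature column rises, that is your variance.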
I'm wondering if you can't just take the SickleYield scene and turn on the caustic sampler and the architectural sampler? That should do it, I would think. People could use the same scene, with the same small memory requirements, and increase the render time from minutes to years. :)
That way everyone can play, independent of GPU VRAM and so on.
BTW, I think we've already shown that the benchmark times in this small scene scale up to other, larger scenes. I and others have tried much larger scenes, and the percentage improvement between different GPUs seems to be the same as with this smaller scene.
Although I agree that when you're getting down to really short times and faster GPUs, it might be better to use a longer-to-render scene to account for loading and such.
Or just up the resolution?
I rendered the SickleYield scene normally, and it finished in 1:10. I then selected the Architectural Sampler and the time more than tripled, to 4:15. Surprisingly, selecting only the Caustics Sampler took just 1:15, about the same as normal; I guess there are no glass shaders in the scene, though I never checked.
Seems like the Architectural Sampler might do the trick. Just an easy checkbox and you're done.
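For the record, here is the math on those sampler runs (a tiny sketch using the m:ss times above):

```python
# Slowdown factors for the sampler runs above (m:ss times).
def secs(mmss: str) -> int:
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

baseline = secs("1:10")  # default settings
for name, t in [("architectural", "4:15"), ("caustics", "1:15")]:
    print(f"{name}: {secs(t) / baseline:.2f}x baseline")
# architectural: 3.64x baseline; caustics: 1.07x baseline
```

So the Architectural Sampler alone is a 3.6x multiplier on this scene, while caustics barely moves the needle without glass in view.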
I had suggested just upping the resolution before, too.
The reason I want to use G8 is twofold. First, G8 is Iray-ready from the start and uses a few more textures. Adding dual-lobe settings was simple enough, and dual lobe did not exist when the original scene was made, so the G8 scene adds a couple of Iray features the original lacks. Second, most Daz users are not using G2 much anymore. It's like how video game benchmarks change games every couple of years: it's not just that new tech pushes cards harder; most new benchmarks use popular high-end games people may actually be playing.
The new scene is not really using much more VRAM, since the main difference is G8 instead of G2.
I didn't turn on the Architectural Sampler because most people are not using that feature. I don't believe it should be turned on just for the sake of jacking up render times; I'd prefer a scene that tests Iray in other ways. Plus, we don't know what effect this setting truly has on hardware. For all we know it might be greatly helped by the CPU (I'm not saying it is, but I don't know of many people who have tested this setting extensively across different hardware).
Not sure how many users have G8. Personally I have no need for it, and no interest in buying a whole new set of textures and wardrobe and so on. I'm assuming a lot of others are in the same boat, and I think the goal is to have as many participants as possible so we get the broadest results. Why not just use G3?
And BTW, with the architectural rendering: the goal here is relative performance, right, so we can compare between GPUs? So no matter what the render is, if you turn off the CPU and get times with the GPUs, isn't that all we need?
I have an FX-8320 CPU and 32 GB DDR3 RAM on Windows 10. The driver is the latest, 390.77. Another curious thing is that I'm getting a lot of unresponsiveness from my PC while rendering, which didn't happen with the other GPU (a GTX 1070). I don't know if some power-management settings got reset with the GPU swap.
You're right, I got 1 min 53 s now. But there shouldn't be this much variance, should there? I mean, the cooling solution should prevent this amount of throttling... I'm guessing it is something about power saving.
Cooling is complicated because every box is configured differently; it's impossible to engineer for every possible component combination. But at least you seem to have located your performance deficiency, and you can work from there.
Everyone, if they download the Genesis 8 Starter Essentials.
Are you monitoring the temps while rendering? If not, get something for that, like EVGA Precision X, MSI Afterburner, or similar. I use Precision X to control my fans. I have an aggressive fan curve that kicks in faster and harder than normal, and it prevents my GPU from ever throttling; I never break 60 C even when Iray runs nonstop for hours. You want an aggressive fan curve because you will be running your GPU very hard for long periods, and it is much easier to replace a GPU fan than a burned-out GPU. Keeping the card below its throttle temperature is the key to maintaining faster renders. That should not be hard unless your case has poor ventilation. If your rig itself is old, it might not be well optimized for airflow; I physically cut bigger airflow holes into my case, but my case is from 2002, and I just could not bring myself to get another one whenever I rebuild. It is a very lovely steel blue with a window, and it strikes a nice balance between bland and Mountain Dew Gamer ridiculous that seems to be lacking in modern cases. The idea of an aggressive curve is sketched below.
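To illustrate what I mean by an aggressive curve, here is a rough sketch of the idea (the curve points are hypothetical, not my exact settings; in practice you set something like this inside Precision X or Afterburner rather than in code):

```python
# Illustrative "aggressive" fan curve: the fan ramps earlier and
# harder than a stock curve. Points are hypothetical examples.
CURVE = [(30, 35), (50, 60), (60, 80), (70, 100)]  # (deg C, fan %)

def fan_percent(temp_c: float) -> float:
    """Linearly interpolate fan speed between curve points."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]

print(fan_percent(55))  # 70.0 -> already at 70% fan by 55 C
```

The point is that the fan is already working hard well below the throttle temperature, so the card never gets a chance to reach it during a long render.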
Another thing to check is that your power supply is up to the task. The Titan can draw a lot more juice than other cards. It is also possible the heat from the Titan is making other components hot. Again... ventilation.
That is a very interesting system setup there: a mid-range CPU from 2012 with a new Titan Xp??? Most people would think you are completely insane. I love it! And though it doesn't always manage it, the fact that you can hit two-minute renders just like every other Titan in this thread is more proof of just how little the CPU matters to Iray when it comes to single-GPU rendering.
I can flip that back at you and say that new users are less likely to have Genesis 2 installed. At any rate, Daz has stated that G8 is the fastest-adopted Genesis model they have had, which indicates G8 is doing pretty well. And it is included free with Daz Studio; even people who don't buy any G8 items usually have it installed just to play with it. Going back to gaming benchmarks: people tend to want the newest games in benchmarks, so why wouldn't we want the latest figure?
I just worry that a little-used feature might turn up surprising results that aren't in line with how people really use Iray.
But either way, I don't really care. We just need to update the benchmark somehow, and it needs to be universal.
CPUs are very overrated nowadays, when even the latest games don't require much CPU power to run at good fps. I myself have a 60 Hz 1080p monitor, so this CPU is more than enough for any game I want to play. As far as rendering goes, it should be obvious that the CPU doesn't matter in GPU-only renders, apart from scene loading times. And for the price I paid for my Titan Xp, a CPU upgrade would net me far less performance gain.
What intrigues me, though, is why I am losing so much responsiveness when I render. Even when I switched back to the "old" GTX 1070 I used before, it remained the same.
Check whether your CPU is participating in the render. All that talk about an overrated CPU, but your CPU does all the work except the rendering, and if you have GPU + CPU checked, it is doing some rendering too and your responsiveness suffers. Don't underrate your CPU; it's still the workhorse in your system.
I totally understand that. It's just that some people swear up and down you are "doing it wrong" if you do it any other way; I bet you money more than a few would laugh at you in a gaming forum if you posted your specs. At 1080p, though, it probably doesn't matter, and for Iray it certainly doesn't. But there is a point where it does matter: multiple-GPU rendering is indeed impacted by the CPU and motherboard.
Are you using both cards in the same system? If you are, that could explain the slowdown. While the CPU is not important for single-GPU rendering, it can be an issue with multiple cards, because you need more PCIe lanes to run the extra cards, even if you are not using them at the same time. This behavior has been documented: a Xeon-equipped PC with multiple Titans renders faster than the same Titans in a PC equipped with an i7. The available lanes make an impact. However, it was still faster than running a single GPU.
My PC can be a bit sluggish when rendering. It's not real bad, but it is certainly noticeable.
The CPU box was never checked... And rendering is by far the most important part of a... rendering program. If the scene takes longer to load, that adds two minutes at most to the render time; if your GPU is bad, it adds hours. Even if viewport navigation were handled only by the CPU, which it isn't, it would still be better to invest in a GPU. The benchmark scene is here to prove it...
I'm just using the Titan Xp, as I will be selling the 1070 to help cover the costs ^^ My PC used to be a little sluggish when rendering too, but this level of lag is unbearable.