Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
Oh? Yeah if that's the case, I didn't know. Quick Google seems to confirm it:
https://www.gamersnexus.net/guides/2488-pci-e-3-x8-vs-x16-performance-impact-on-gpus
Yeah, I think that the importance of PCIe lanes is yet another one of those common myths that keeps popping up in tech circles. It seems reasonable that more is better, but a lot of people just leave it there and assume that's the answer. As GN (and others) show, it's pretty much irrelevant.
This corresponds with a previous iRay study that showed that CPU participation becomes less and less influential on the render as you add more GPUs to the mix. You can observe this if you redo your test with just one GPU active. You should see that CPU participation will decrease the render times. My personal test with dual Xeons and only one 1080 Ti showed the CPU contribution causing a 17% decrease in render times. That number will decrease as I add more GPUs to the render.
I am impressed by the Threadripper. Your 16 core chip was only 16 seconds slower than my 28 cores. This holds great promise for the 32 core chip I have my eyes on.... I'm really looking forward to Epyc prices coming down.
Let's see...
The cards have been rendering for the last few hours, so these results will be a little slower since they are hot.
1 x GTX1080TI:
2017-10-15 21:02:33.624 Finished Rendering
2017-10-15 21:02:33.674 Total Rendering Time: 1 minutes 57.85 seconds
1 x GTX 1080TI + 16 Core Threadripper
2017-10-15 21:04:56.331 Finished Rendering
2017-10-15 21:04:56.410 Total Rendering Time: 1 minutes 39.59 seconds
2 x GTX1080TI:
2017-10-15 21:06:25.664 Finished Rendering
2017-10-15 21:06:25.741 Total Rendering Time: 59.96 seconds
2 x GTX 1080TI + 16 Core Threadripper
2017-10-15 21:07:58.279 Finished Rendering
2017-10-15 21:07:58.333 Total Rendering Time: 56.82 seconds
3 x GTX1080TI:
2017-10-15 21:09:03.950 Finished Rendering
2017-10-15 21:09:04.098 Total Rendering Time: 42.14 seconds
3 x GTX 1080TI + 16 Core Threadripper
2017-10-15 21:10:08.587 Finished Rendering
2017-10-15 21:10:08.667 Total Rendering Time: 41.17 seconds
4 x GTX1080TI:
2017-10-15 21:11:20.368 Finished Rendering
2017-10-15 21:11:20.465 Total Rendering Time: 33.1 seconds
4 x GTX 1080TI + 16 Core Threadripper
2017-10-15 21:12:11.197 Finished Rendering
2017-10-15 21:12:11.269 Total Rendering Time: 33.8 seconds
4 x GTX 1080TI + 16 Core Threadripper WITHOUT OptiX
2017-10-15 21:13:31.895 Finished Rendering
2017-10-15 21:13:32.004 Total Rendering Time: 52.50 seconds
WOW... the result without OptiX is really bad.
What can I say ... it seems drzap is totally right. The more GPUs you use, the less important the CPU becomes. However, I could imagine a scenario in which somebody is building up a new render rig. At the beginning, with only one GPU installed, the Threadripper is a good help. Afterwards it becomes quite useless.
It wasn't my study, so the credit should go to www.pugetsystems.com; they do extensive testing on their 3D workstations. They show that if you are building a rig exclusively for iRay, a Threadripper, let alone a dual Xeon workstation, is probably overkill. But the study was directed toward companies and studios. Very few individuals will build a computer just for one piece of software.
Well, of course. As I said before, the faster the total render (ie, the more GPU's) the less chance the much slower CPU's have to participate. Do the math like I previously did on your newest results and you'll see that the CPU percentage of the total scene is always like 5%, compared to a faster and faster total GPU result. So 5% of less and less is, well, less and less.
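Doing that math on the timings posted above makes the trend concrete. This is a sketch of the arithmetic only, using the numbers from the 1080 Ti benchmark post (seconds, as GPU-only vs. GPU + Threadripper pairs):

```python
# Percent reduction in render time from adding the 16-core Threadripper,
# taken from the timings posted above: (GPU only, GPU + CPU) in seconds.
timings = {
    1: (117.85, 99.59),
    2: (59.96, 56.82),
    3: (42.14, 41.17),
    4: (33.10, 33.80),  # here the CPU actually made the run slightly slower
}

savings = {n: (gpu - both) / gpu * 100 for n, (gpu, both) in timings.items()}

for n, pct in savings.items():
    print(f"{n} GPU(s): adding the CPU changes render time by {pct:+.1f}%")
```

The CPU's benefit falls from roughly 15% with one GPU to essentially zero (slightly negative) with four, which is exactly the "5% of less and less" effect described.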
BTW, from a software developer perspective, one way to view it is like this:
Imagine you have an image to render. Say 2 million pixels. And you want to hand it off to 5 people to render it simultaneously. One of them is slower than molasses, and the other 4 are real fast.
The clock starts, and the slow guy plods along while the fast guys are zipping thru their part. And after 30 seconds the 4 fast guys say "hey, we're finished with our part", so they jump on whatever is left of the slow guy's part and finish it.
The slow guy will always work at the same speed, so the more fast guys then the less time the slow guy can contribute. No need for extensive testing to determine that.
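The "slow guy plus fast guys" picture above can be sketched as a toy model: if workers pull pixels from a shared pool, each worker's share of the image is proportional to its speed. The rates here are made-up illustrative numbers, not benchmarks:

```python
# Toy model of work-stealing among render workers: everyone pulls pixels
# from a shared pool, so each worker's share is proportional to its rate.
# fast_rate and slow_rate are invented for illustration.
def slow_worker_share(n_fast, fast_rate=10.0, slow_rate=1.0):
    """Fraction of the image the slow worker ends up rendering."""
    return slow_rate / (slow_rate + n_fast * fast_rate)

for n in (1, 2, 4, 8):
    print(f"{n} fast worker(s): slow worker renders {slow_worker_share(n):.1%}")
```

With one fast worker the slow one still handles about 9% of the pixels; with four it is down to about 2.4%. No extensive testing needed, as the post says.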
BTW, I've done multi-threaded image processing in C# using stuff like "LockBits", and I think that's probably a decent picture of how the software does its thing. Though I've never done GPU stuff...
Doesn't help that Iray is a GPU renderer. Why we are even talking about CPU's in here is a little beyond me
Correction: Octane is a GPU renderer. Redshift is a GPU renderer. Iray is a hybrid renderer. You need not have a GPU at all to use iRay. So a CPU is still relevant, especially if one is a power user with heavy scenes that don't fit in GPU VRAM. In that case, it seems the Threadrippers can render as fast as entry-level Nvidia GPUs, which is very encouraging.
Right, except that Redshift and Octane both have out of core rendering which eliminates the VRAM limitation in the first place. I call Iray a GPU renderer because it's just too slow when it runs on the CPU to be more than an afterthought.
Actually, if you compare it to a true GPU renderer, DAZ iRay is just plain slow. Any way you look at it, it is gimped.
Is that true? I didn't know that. Did someone do tests on DAZ scenes using "true" renderers? Maybe that's in our future, going to a faster renderer in Studio. That would be nice.
My oversimplified view of when powerful CPUs are relevant is something like this:
1. Most real 3D software is multithreaded; I haven't come across any software that is otherwise. Many of those programs are also GPU-assisted, so it usually isn't as simple as either CPU or GPU. A balanced system is probably necessary for a 3D artist to get the most out of their workflow. This has nothing to do with Daz Studio, of course.
Daz Studio is not professional software, and its implementation of Nvidia iRay is suitable for its users. I doubt they will devote resources to upgrading the renderer significantly beyond what it does now. After all, it's free software. Can't complain too much about free.
I'm just curious where the "gimped" came from. I've been thinking about moving stuff into Blender for Cycles or the new Eevee, hoping maybe it's faster, but if you've seen tests with other renderers I'd be interested to chase them down.
I'm certainly not aware of anyone with access to Iray Server reporting that it is faster than the Iray in DS, which is the standard nVidia code base (Daz doesn't write it, just integrates it with DS). It may not have certain features enabled, but it isn't limited, so far as I am aware, in those which are enabled.
As I said, Daz iRay is just a subset of the full-featured Nvidia iRay. It lacks capabilities that many artists need or want. It's slow, crude, and lacks versatility. This is why many artists, including myself, choose to render in other software. I think of it as the real iRay's younger cousin. But I don't see many of the stills-rendering core of DS complaining much about the renderer, so I think it's good enough for them. As for Blender, I don't have any experience. Eevee is looking good, but it's still in beta and I'm expecting many bugs when it's finally released. Looking forward to seeing how it develops.
Yeah, I wasn't referring to Daz iRay specifically as being slow, but iRay as a renderer. Comparing it to a true gpu renderer is like comparing a typical sports car to a Formula 1.
Like Octane? I'm not aware it's faster than Iray, never tried though. Redshift is faster mainly because it's a biased renderer.
It would be nice if you could refer to actual numbers rather than general statements. This is the first time I've heard claims that iray is really slow, so if there's some data to support that it would be nice to have.
"Like Octane? I'm not aware it's faster than Iray, never tried though. Redshift is faster mainly because it's a biased renderer."
Octane is like a stallion (though I've never used the DS plugin). Blazing fast. Redshift is even faster. I also love FurryBall because it plays so nice with my favorite renderer, Arnold. But GPU renderers are peculiar. They have some irksome limitations. I compared them with a Formula 1 car for a reason. Race cars are great on smooth racetracks, but don't try to go off road. It's the same with GPU renderers. When they are compatible, they are great, but often they are incompatible with a 3D effect or workflow. Biased or unbiased doesn't much matter to me as long as I can get the look I want.
"It would be nice if you could refer to actual numbers rather than general statements. This is the first time I've heard claims that iray is really slow, so if there's some data to support that it would be nice to have. "
Raw numbers rarely tell the real story when it comes to choosing software. Features, compatibility, and workflow issues are what matter most. A better way to choose software is to identify what your current workflow is lacking and choose a tool that better fills your needs. If you don't have something that you need, you will know it. And if a piece of software helps you to create better, then no number will convince you of anything.
Well, if renderer X renders the same scene 80% faster than D|S iray, that's a pretty good story.
But if you don't have any numbers that's fine. "Stallion" is good enough. Thanks.
"Well, if renderer X renders the same scene 80% faster than D|S iray, that's a pretty good story.
But if you don't have any numbers that's fine. "Stallion" is good enough. Thanks. "
A renderer is just one part of the software. A small part. In DS, you only have two choices, iRay or 3Delight, so this is academic for you. If you want to transfer your scenes outside of DS, workflow issues will far outweigh your renderer choices. Most people settle on their pipeline before they decide upon a renderer. For DS users, this is pretty much irrelevant, which is why you won't see many numbers comparing anything to Daz Studio.
For the majority of DS users, the additional complexity of using third party render engines is why Iray has become so popular, despite the DAZ implementation not having as many features as the full version has. And even then I'm not sure there are plugins for other applications that fully implement Iray either.
"...And even then I'm not sure there are plugins for other applications that fully implement Iray either."
That's a curious statement, I'm not sure what you mean. There are iRay plugins for C4D, Rhino, Maya and 3dsMax. Until recently, it was one of the renderers that shipped with Max. They are all full implementations as far as I know.
Yes, those implement most, if not all, of the features currently included in Iray, but what about software like Allegorithmic's Substance suite? The Iray implementation there doesn't include, to my knowledge, support for VR like some of the others do, among other things.
Oh, yeah. I doubt Substance Painter needs full iRay features. Each party takes what they need. To be fair, I doubt most DS users would make much use of the full iRay package, though I would love to see camera and scene motion blur.
Taking a page out of OZ-84's book, here's mine (OptiX on, all cores enabled, CPU rendering @ 100% usage):
(GPU Only) Titan Xp (SLI)
2017-10-15 20:58:51.942 Finished Rendering
2017-10-15 20:58:51.973 Total Rendering Time: 57.64 seconds
(CPU Only) i9-7920X (Dodeca Core):
2017-10-15 21:12:34.838 Finished Rendering
2017-10-15 21:12:34.873 Total Rendering Time: 11 minutes 17.86 seconds
(CPU and GPU):
2017-10-15 21:15:36.843 Finished Rendering
2017-10-15 21:15:36.878 Total Rendering Time: 54.51 seconds
So, as anyone can see, I gained a few seconds of performance with everything enabled, otherwise, the CPU, as powerful as mine is, is irrelevant when using graphics cards like a Titan Xp (or two, in my case).
Btw, SLI disabled:
2017-10-15 21:43:31.680 Finished Rendering
2017-10-15 21:43:31.711 Total Rendering Time: 57.13 seconds
I was under the impression that disabling SLI would yield notable gains. In this case, at least, that's not to be.
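For scale, here's a sketch of the same arithmetic applied to the Titan Xp numbers posted above (11 minutes 17.86 seconds CPU-only, 57.64 s GPU-only, 54.51 s combined):

```python
# Ratios from the Titan Xp / i9-7920X timings posted above (seconds).
cpu_only = 11 * 60 + 17.86  # 677.86 s, i9-7920X alone
gpu_only = 57.64            # two Titan Xp, SLI on
gpu_cpu = 54.51             # CPU and GPU together

cpu_vs_gpu = cpu_only / gpu_only                      # how much slower CPU-only is
cpu_benefit = (gpu_only - gpu_cpu) / gpu_only * 100   # % saved by adding the CPU

print(f"CPU-only is {cpu_vs_gpu:.1f}x slower than the two Titan Xps")
print(f"Adding the CPU saves {cpu_benefit:.1f}% of the GPU-only time")
```

So even a 12-core i9 is nearly 12x slower than the GPU pair, and adding it to the GPU render buys only about 5%, which lines up with the earlier Threadripper results.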
I have a question, can you help me?
My PC is an i7 3770, a GTX 1070, and 16GB DDR3.
I was wondering about upgrading my platform to an i7 8700 or a Ryzen R7 1700 (and 16GB DDR4).
The question is: will I get much more performance rendering with the GPU?
Thanks
I'm an Intel nerd, so emotionally, I'd go with the 8700 (especially as benchmarks suggest it'll be better for gaming). As far as Iray performance, however, the 1070 is what will benefit your system the most.