Comments
My only PC :)
Windows 7 Pro x64
i7-4790 4GHz (8 cores)
GTX750 2GB OC Edition
16GB DDR3 1600MHz RAM
Complete Scene to 90%
CPU - 21m 22s
GPU - 8m 23s
Both - 7m 2s
Scene less spheres 8 & 9 to 90%
CPU - 3m 50s
GPU - 1m 47s
Both - 1m 26s
Interestingly, I had an issue with fireflies only in the edited scene; in the complete scene they came and went. Possibly due to a greater number of Iray iterations?
That looks very much like the free Sponza model. I think it's at ShareCG, and someone (from here?) did a .daz scene load for it.
In fact, you used it here... http://simonjm.deviantart.com/art/Walk-in-sunlight-206930382 though you didn't add a link to the model :(
I thought I'd repeat my render test with a Film ISO setting of 200 and a shutter speed of 1/256, thus simulating a faster film. I was hoping this would also result in a faster render - no such luck! The render took just as long as the ISO 100, 1/128 test, and there was no discernible difference in image quality either.
So I am wondering what the purpose of the ISO, F/Stop, and Shutter Speed parameters is, seeing as the effects they would normally engender have to be fashioned by other means anyway.
My results:
Intel Core i7-3610QM 2.30GHz x8
12 GB RAM
NVIDIA GEFORCE GTX 670M 3GB
Windows 7 Home 64-bit
Full Scene
CPU Only Render Time to 90%: 24m19s
GPU Only Render Time to 90%: 10m28s
Both Render Time to 90%: 7m44s
Spheres 8 and 9 Removed
CPU Only Render Time to 90%: 3m40s
GPU Only Render Time to 90%: 2m17s
Both Render Time to 90%: 1m46s
They can be used as exposure settings just like on a real camera, so they can lighten or darken a scene overall, for instance, but they will only speed up or slow down render times under some circumstances. For instance, in a dark room, a higher film speed, if it brightens the room enough, should show a reduction in render times. If you totally blow out a scene with light, like a mostly white scene, it will render super fast. :) You most likely won't wind up with anything useful, though. ROTFL!
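That also explains the ISO 200, 1/256 result above: by the standard photographic exposure relation, doubling the film speed while halving the shutter time cancels out exactly. A minimal sketch in Python (the f/8 aperture is a hypothetical placeholder, and this is the textbook formula, not necessarily Iray's exact tone-mapping implementation):

    def relative_exposure(iso, shutter_time, fstop):
        # Image brightness is proportional to ISO * shutter_time / fstop^2.
        return iso * shutter_time / fstop ** 2

    print(relative_exposure(100, 1/128, 8.0) == relative_exposure(200, 1/256, 8.0))
    # True: the two settings produce the same exposure, so neither the image
    # nor the amount of work the renderer has to do changes.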
OK, rebench... Not going to re-do CPU-only; that didn't change.
CPU&GPU: approx 5-6 min
GPU-only: approx 6.5 min
What changed? Buh-bye GTX 550 Ti... Huuullooo GTX 970...
Congratulations!
Finally had a chance to really run the benchmark tests. Not impressive compared to what some people get, but I'm okay with what my 4 year old machine can do.
specs:
AMD Phenom II X6 1100T Black Edition 3.3GHz
Nvidia GTX 560 Ti 1GB
16 GB RAM
dual monitors, one at 1920x1080, one at 1280x1024
Windows 7 Pro
times:
GPU + CPU - full image
90% 5:08
100% 12:57
GPU only - full scene
90% 7:50
100% 21:06
- minus spheres 8 & 9
90% 1:52
100% 17:59
CPU only - minus spheres 8 & 9
90% 4:48
100% 1:02:53
- full scene
90% 26:03
still at 90% at 40:44; gave up, I need to go to bed. ;)
conclusion: Not as fast as some people's impressive builds, but not too bad. The time beats Lux hands down with both CPU and GPU doing their thing. Can't wait till I have some real time to play with this shiny new toy!
And by the way, as always while rendering, I had my poor machine doing several other things as well. It might improve times if I didn't have the browser open, and wasn't running Celtx, or the random photo widget, or the random desktop backgrounds, or... you get the picture. My poor computer is used to getting shoved beyond what it should reasonably be expected to do.
Great thread! Here are my results:
GeForce GTX 780 ti (2880 CUDA cores)
Intel i7-4770K CPU @ 3.5 GHz
GPU Only: 5 min 53 sec
GPU+CPU: 5 min 24 sec
Congratulations!
Thanks! Did some price matching from an accepted online source at a local branch of a blue-and-yellow big-box store. :) Got a bit of a deal on the super-super-clocked version! It's sweetly humming away in my case!
Using GPU only, it took 9 min 6 sec on my rig. Not trying the other options, as THAT was too slow for my liking; I do animations and use faster choices.
Intel Core i5-2500 CPU @ 3.30GHz
16.0 GB RAM
GeForce GTX 760
Update! Got me a GTX 970 4GB card :cheese:
Previous results: Desktop = i7-3770 3.40 GHz 4-core - 90% = 20 min, finished in 51 min.
New ones: didn't see the 90% mark, but finished in 6 min 54 seconds.
Whoo hoo! LOL
Now to figure out how to actually make good renders instead of just fast ones. :red:
On my system, it took 1 minute, 45 seconds to get to 90% and 3 minutes, 40 seconds to get to 100%. I did not remove any spheres, nor change any other aspects of the scene file or its render options.
I confirmed that indeed CPU and both GPUs were employed. My system went to 100% utilization during the render and my screen movements got really sluggish.
AMD FX-8350, ASUS M5A97, 32GB RAM.
Cache and temp in a 4GB DRAM-drive. No fake RAM (aka no swapfile).
GeForce 8600GT 512MB (CUDA, what's that, lol.)
Total Rendering Time: 1 hours 56.47 seconds
The render maxed out on iterations (5000 samples) before it completed the 95% convergence, as hinted by my screen-cap. Yes, I added a zero to the max time, as I am on CPU only.
I should have had more faith in my computer I guess. :-/
So the billion-dollar question is: does "Architectural Sampler" and/or "Caustic Sampler" improve CPU rendering? They are off by default in this bench. And does "OptiX Prime Acceleration" do anything in CPU-only mode? Sounds like a chance to do more testing.
Architectural Sampler and Caustic Sampler are for specific cases and should only be used on a case-by-case basis - for example, an interior scene lit mostly by exterior sunlight, or when looking for super-accurate caustics. These are used by architects for previz and in jewelry-store advertisements. Render-time wise, they are not cheap to activate. (The splash screen with those on went 17+ hours and wasn't done; with those off, it took around 2 hours on a system with 2 x K6000.)
GPU Only
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 770): 5000 iterations, 24.014s init, 636.832s render
Total Rendering Time: 11 minutes 3.55 seconds
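Several posts in this thread quote the same Iray log line, so here's a quick sketch (Python, assuming the log format exactly as quoted above) that pulls the numbers out and converts them to iterations per second, which makes cards easier to compare than raw minutes:

    import re

    # Iray render-summary line, exactly as quoted above.
    line = ("Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : "
            "CUDA device 0 (GeForce GTX 770): 5000 iterations, 24.014s init, 636.832s render")

    m = re.search(r"(\d+) iterations, ([\d.]+)s init, ([\d.]+)s render", line)
    if m:
        iters, init_s, render_s = int(m[1]), float(m[2]), float(m[3])
        print(f"{iters / render_s:.1f} iterations/s ({init_s:.0f}s init overhead)")
    # -> 7.9 iterations/s (24s init overhead)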
Morning, y'all. As I'm on CPU only, I was curious how some settings affected render times. I set the max samples of the original test scene to 1,000 and recorded the time it took to get 1,000 iterations done.
I have no idea what OptiX Acceleration is, though apparently if you're on an AMD CPU, or the graphics card is, shall we say, 'incapable', it is probably best to leave it off. I had it checked, along with my GPU, hoping in vain that the graphics card would contribute something. That was a mistake.
I also wanted to see the bench scene in a tad more detail, so I set it up to run as I slept. That's how I do most things: I work on scenes while I'm awake, doing lots of small test renders, then if a scene is ready, I let it run at high quality as I sleep. Like SY, I have short patience with my computers when I want to be using them, lol.
I let this one run with the time and sample limits essentially off, so I could capture the settings not put in the log file (Daz3D, please add them, please, please, please). I also decided that Fiery Genesis looked a lot better with “Crush Blacks” off (0.00) in this scene. The room, though, needs to be a lot bigger to get that ambiance of endless space back. Something to try when I go to bed tonight.
AMD FX-8350, ASUS M5A97, 32GB RAM.
Cache and temp in a 4GB DRAM-drive. No fake RAM (swapfile).
GeForce 8600GT (CUDA, what's that, lol.)
* For the sake of reporting benchmark times, I highly advise that you don't change any settings when the scene is loaded. That way we get an accurate idea of what a computer (CPU and/or GPU) can actually do. This is a 'benchmark'; let's not fudge the results, y'all.
Architectural Sampler and Caustic Sampler are for specific cases and should only be used on a case-by-case basis.
Wow, now I don't feel bad at all about my light-fixture test run, lol. The manual page only appeared to list the stuff in the Render tab (no definitions for them or anything, just a list), so I was not sure if that was like 'shadow maps' vs. ray-tracing or not. Good to know. Thanks!
I had shut off everything except Daz and set the limit to 1k samples to look at times. It looks like the AMD CPU, or something somewhere in my computer, does not like "OptiX Prime Acceleration" at all; it added nearly 2 minutes per 1k iterations with SY's test scene at stock settings. That, combined with Instancing Optimization (Memory/Speed), had some really curious results.
Preliminary test, truncated times (AMD FX-8350, Win7 64-bit, CPU only):
12 minutes per 1k iterations. Instancing Optimization (Speed), OptiX (on)
12 minutes per 1k iterations. Instancing Optimization (Memory), OptiX (on)
11 minutes per 1k iterations. Instancing Optimization (Memory), OptiX (off)
9 minutes per 1k iterations. Instancing Optimization (Speed), OptiX (off)
"O", my notepad window was to narrow. :red:
I have almost the same setup: GTX 780 6GB RAM, MacPro2012 with 10.10.2, and I've discovered today, after quite extensive research, that I cannot render the Bree texture with the GPU. Try this:
- Set the preview mode to OpenGL textures (to prevent Iray from loading the Bree texture in the preview).
- Load G2F and set the texture to V6 Belle (or any other texture except the default Bree one).
- Render, and check the logs that the GPU render works.
- Now set the texture to Bree.
- Render (and you will see the memory errors on the GPU).
I bet there is an error in one of the textures that normal JPEG-reader programs don't choke on, but when the texture is to be downloaded to Iray, it looks like it will need a gazillion bytes, and it fails.
Please do test, as you have the same setup as I do, and things like this can be card-related (how things are implemented in the card or the driver).
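One way to sanity-check the 'gazillion bytes' theory is to look at a texture's decoded size rather than its JPEG file size, since the uncompressed pixels are what get uploaded to the GPU. A sketch using Pillow (the file name is illustrative, and this is a generic check, not a Daz tool):

    from PIL import Image  # pip install Pillow

    def decoded_size_mb(path):
        # Decoded footprint: width * height * channels at 8 bits per channel.
        # This, not the compressed JPEG size, approximates what the GPU holds.
        with Image.open(path) as img:
            w, h = img.size
            return w * h * len(img.getbands()) / 1024 ** 2

    # Usage (hypothetical file name):
    # print(f"{decoded_size_mb('Bree_Face.jpg'):.1f} MB decoded")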
I've got a similarly old system, an Intel Core i7-930, but with one difference. The 930 defaults to a 2.8GHz clock, but I have mine running at 3.8GHz with a fairly beefy air cooler (Cooler Master Hyper 212 EVO). I could push it over 4GHz, but I wanted to keep temps in the 70s under full load, so I backed it off a little.
Anyway, CPU only - Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CPU (8 threads): 5000 iterations, 18.658s init, 2660.251s render. 44 Minutes 20 seconds.
I've been "borrowing" the GTX 980 out of my gaming machine while I decide which GPU to put in my art machine. The 980 does:
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 980): 5000 iterations, 19.855s init, 276.719s render. 4 minutes 36 seconds.
This is with the newer 4.8.0.9 build and spheres 8 and 9 hidden.
EDIT:
Forgot to mention that the GTX 980 is the EVGA Superclocked version running at the stock speed it shipped at: 1266MHz. It's supposed to boost to 1367MHz (according to the GPU-Z info), but watching the sensors, it ramped up to 1404MHz and pretty much stayed there the entire time. That may change on longer renders, though.
Reading back through this it makes me sound like some sort of overclocking maniac, but I rarely do it and I usually don't buy factory overclocked cards either... things just happened to work out that way this time :)
EDIT 2:
Went ahead and ran the full scene (spheres 8 and 9 visible) on the 980 by itself, just for completeness, as it might make it easier to compare scores against other cards.
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 980): 5000 iterations, 19.783s init, 305.965s render
Finished Rendering
Total Rendering Time: 5 minutes 27.33 seconds
Intel i7-4960X hex-core 3.6GHz, Nvidia GTX Titan 6GB, Windows 7.
Ran at the preload settings and dimensions.
CPU & GPU - 5 minutes 10 seconds
GPU only - 6 minutes 12 seconds
CPU only - 23 minutes 21 seconds
CPU & GPU with OptiX Prime Acceleration ticked - 3 minutes 54 seconds
Guessing I won't be turning the GPU off very often :)
dminut, nice score. I noticed the new beta version 4.8.0.9 did make a lot of stuff faster; other things stayed the same. The time was around 50 minutes, though it's not in the log now? "O" well, I'll run the test again later. Shaving about ten minutes off of a render in CPU-only mode just doesn't seem right to me, unless there was a drastic change to the 3Delight interface from the last version (4.8.0.4).
I'm currently digging up info on Iray-compatible CUDA cards; apparently there are CUDA versions that don't work at all with Iray, while others do, and so on.
I'm sure I'm not the only one to share plug-ins between editions of DS4. Has anyone tried Iray in one of the earlier editions?
The 930 defaults to a 2.8GHz clock, but I have mine running at 3.8GHz with a fairly beefy air cooler.
I will point out that overclocking and rendering are usually a bad combination (regardless of cooler). It has a tendency to cause BSOD crashes, especially with longer renders, as CPUs develop hot spots that are above the crash threshold.
Note: factory overclocking, the way most video cards arrive, is fine; the cooling is designed for that temperature.
While that's all true, there are different degrees and different situations with different processors. In my case, I ran Prime95 for 24 hours under full load without crashing or getting numerical errors in the prime calculations. Many tweaks had to be made to get it to that point, so it's definitely not for the faint of heart :) And while it's not liquid cooling, the heatsink I used is huge (it uses a 120mm fan and can take a second one if I need it). It doesn't hurt that the i7-920 and 930 are really good overclockers.
I originally did it to speed up my 3Delight renders after waffling on upgrading to an 8-core Haswell-E. Now I'm glad I held off, as a $600 GTX 980 is a much larger improvement in speed than the 3 grand I was looking at for the new system. I don't even use the CPU for my normal renders and tick only the GPU box now... not much point in adding the CPU when it would only save a minute or so anyway :)
My only concern now is that I'm not sure how limiting 4GB of RAM on a GPU is going to be. The Titan X is tempting, but I'm not sure whether I'd just be wasting my money. With my current scenes, nothing gets over 2.6GB usage on the 980, so I'm probably OK. I'm just not sure what a 4GB Daz scene would look like.
I'm wondering the same thing... and am actually testing at the moment. I sent a 1.2GB scene plus 12 people and hit 3.2GB. However, from what I'm seeing, Iray is superb at handling texture images. It really does seem to load a shared map only one time, and it loves tiling. So this scene is pretty rough on OpenGL in texture mode - pretty sluggish moving about on the GTX 660s. It sort of presses the limits of the scenes I like to use, yet I didn't blow out the VRAM. But, again, this is going to come down to the textures. In theory, one outfit with a lot of huge maps could blow it out.
I'm thinking of adding a Titan X at some point, just for those occasions that will certainly happen somewhere along the line where 4GB is not enough, and then for most scenes letting the workhorses be the 980s alongside the Titan. I keep watching to see if something blows my CUDA-core theory up. So far it seems the number of cores gives a very linear gain in speed, regardless of card specs.
I asked the exact same question, given my potent 32-bit experience :coolhmm:
How many unique Generation 6 HD figures, clothes, and hair can you cram into 4GB?
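For a back-of-envelope answer, you can total the decoded texture sizes, assuming 8-bit RGB maps and that Iray keeps one copy of each shared map (as observed above). A Python sketch; the map counts and resolutions below are purely illustrative:

    def texture_mb(width, height, channels=3):
        # Decoded size of one 8-bit-per-channel map, in MB.
        return width * height * channels / 1024 ** 2

    # A hypothetical character: six 4096x4096 maps plus ten 2048x2048 maps.
    maps = [(4096, 4096)] * 6 + [(2048, 2048)] * 10
    print(f"~{sum(texture_mb(w, h) for w, h in maps):.0f} MB decoded")  # ~408 MB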
DAZ_Spooky... http://www.daz3d.com/forums/viewreply/785791/
to prevent cross-posting, lol.
It looks like version 4.8.0.9 is a tad faster than the last version (4.8.0.4).
4.8.0.4
Total Rendering Time: 1 hours 56.47 seconds
4.8.0.9
1.0 IRAY rend info : CPU (8 threads): 5000 iterations, 24.298s init, 3281.361s render
Total Rendering Time: 55 minutes 7.93 seconds
about 89% and change converged.
I can think of two reasons for the improvement: the removal of diagnostic code from the Studio-to-3Delight interface, or further optimization of that interface code. Either way, I'm happy with the results.
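For what it's worth, taking the two log totals above at face value (reading "1 hours 56.47 seconds" as one hour plus 56.47 seconds), the improvement works out to roughly 10%:

    old_s = 1 * 3600 + 56.47  # 4.8.0.4: "1 hours 56.47 seconds"
    new_s = 55 * 60 + 7.93    # 4.8.0.9: "55 minutes 7.93 seconds"
    print(f"{(old_s - new_s) / old_s:.0%} faster")  # -> 10% faster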
I didn't run a test with the spheres removed the first time, as I wanted to know how my computer would fare with figures in scenes, not just static props, lol. The same this time as well, though now I'm curious: dminut ran a test on a system rather close to mine. I have eight cores in this computer, each with its own integer unit, but they only have four floating-point units to share between them (not quite 4+4 hyper-threading, though close). I will have to try the test again without the SSS spheres (8 and 9).
"O", FYI, this CPU is at its stock 4.0GHz ALL the time; I disabled that boost thing on day one, for concerns similar to those voiced thus far. I love massive heatsinks with lots of pipes, though I don't do overclocking. I learned my lesson years ago with "Low Noise Amplifier" designs, in another life (the 1980s).
AMD FX-8350, 32GB RAM, GeForce 8600GT (512MB), SSD boot drive (C), SSD programs drive (D), eight data drives, cache & temp in a 4GB RAM-drive.
(EDIT)
Now this is curious. Making the SSS spheres vanish from view in the scene does NOT decrease the render time as considerably as I had thought. Also, the ONE factor that differs considerably between having SSS in the scene and not is NOT included in the log file: percent converged. It's all good though; I'm having fun rendering stuff.
No sphere 8 and 9.
1.0 IRAY rend info : CPU (8 threads): 5000 iterations, 24.135s init, 3046.625s render
Total Rendering Time: 51 minutes 12.69 seconds
about 92% converged.
(EDIT2)
Good point, noise.gate.mike. All three of these tests were done with CPU, GPU, and "OptiX Prime Acceleration" checked, just to keep them consistent. Forgot to mention that.
The GPU is only CUDA 1.1, so it did nothing, other than perhaps slowing things down a clock cycle or so while initializing the render. It stayed at 0% load the entire time.
CPU: Intel i7-4790K 4.0GHz
GPU: Nvidia GTX 970 4GB
___________________________________
Default preload settings and dimensions.
TESTS:
1_ CPU + GPU: 5 min 41 sec
2_ CPU + GPU with OptiX Accel: 3 min 51 sec
3_ GPU Only: 6 min 13 sec
4_ GPU Only with OptiX Accel: 4 min 04 sec
5_ CPU Only (without spheres 8 & 9): 30 min 04 sec
6_ CPU Only with OptiX Accel (without spheres 8 & 9): 31 min 53 sec
EDIT: I'm not sure about tests 5 & 6... I was browsing in Chrome at the same time.