Comments
OK...there is no smiley for a facepalm... :red: That will have to do!
IDL = Indirect Lighting ... but in my tortured mind I associated "IDL" with a photometric lighting profile for some reason... My bad.
Lol..
No worries sir..!!!
I would still be interested to know what your core utilisation is for that scene though.
SK.
I did note earlier in this thread that I would occasionally see the CPUs drop to near zero in Performance Monitor. That was before this "render benchmark" experiment.
I'll probably do another set of renders, with and without IDL, while capturing the CPU counters. I'll likely turn down some of those settings (RoguePilot pointed out to me in a PM that they were needless anyway...just part of my continuing education) so that the render doesn't take so long.
It would be interesting to see if IDL can be linked to those CPU drops -- I had initially assumed that the L1 cache was being invalidated for some reason.
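For anyone who wants to capture the same kind of trace without setting up Performance Monitor counters by hand, here's a rough sketch of the sampling I have in mind. It isn't Carrara-specific and it assumes the third-party psutil package; you'd run it in a separate console while the render is going and then look for cores that dip toward zero.

```python
# Per-core CPU sampler (assumes the third-party psutil package is installed).
# Writes one row per second; look for cores that drop toward zero mid-render.
import csv
import time

import psutil

with open("render_cpu_trace.csv", "w", newline="") as f:
    writer = csv.writer(f)
    cores = psutil.cpu_count()
    writer.writerow(["elapsed_s"] + [f"core_{i}" for i in range(cores)])
    start = time.time()
    for _ in range(3600):  # sample once per second for up to an hour
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        writer.writerow([round(time.time() - start, 1)] + per_core)
```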
Hey Garstor,
did you try benchmarking the Beast? (see this thread from the old forum)
The file is available here.
Would be interesting to see your time. ;-)
8 cores - 3.38
2:01 - 6 core / 12 thread Intel 3930K running at 4.2 GHz, though the bucket size of 128 is not the best option for large multi-core machines.
Reducing the bucket size to 16 resulted in a time of 1:32, as all cores were fully utilised for the full course of the render.
Peace,
S.K.
I finally captured a Performance Monitor trace during one of the renders of my scene (BTW, I'll definitely check out that benchmark render scene and post results here as soon as I can).
I took the advice from RoguePilot and experimented with dialing back some of the render settings. It made quite a difference in render time and very little difference in final image quality. That was a useful learning experience.
So that first "all-out" render took about 3.5 hours. Dialing it back a bit (Antialiasing "Good", Object Acc. 1 pixel, Shadow Acc. 2 pixels, Lighting Quality Excellent, Lighting Accuracy 1 pixel) took 3 hours & 4 minutes. Dialing back even further (Antialiasing "Good", Object Acc. 2 pixels, Shadow Acc. 2 pixels, Lighting Quality Good, Lighting Accuracy 4 pixels) dropped the time to 1 hour & 24 minutes.
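Just to put numbers on the improvement, here's the speed-up arithmetic for those three renders (treating "about 3.5 hours" as 3:30):

```python
# Speed-up arithmetic for the three renders above (times in minutes).
renders = {
    "all-out":             3 * 60 + 30,
    "dialed back":         3 * 60 + 4,
    "dialed back further": 1 * 60 + 24,
}
baseline = renders["all-out"]
for name, minutes in renders.items():
    print(f"{name:20s} {minutes:4d} min   {baseline / minutes:.2f}x vs all-out")
```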
What surprised me was the PerfMon results of that last render:
You can clearly see that some cores (specifically numbers 6 through 11) hover around the 70% mark. The rest of the cores hover around 15%. The highlighted black line is the average of all the cores -- 21.14%.
I suspect this difference stems from the implementation of the multi-threaded rendering code. For some reason it is not able to (or is designed not to) use the cores evenly. The workloads are not created equal...
Edit: Trying to get the PerfMon capture to display...dammit...I give up...blasted forum software...
Swordkensia,
2:01??!!! Smo-kin!
My ASUS G-73 clocked in at 7:15.......it's reasonably fast for me. Will try it on my Octane box later.
:)
I opened the file and went straight to the render room, no tweaks to the settings. Because it was set to produce a 400x400 image, the renderer was not able to use all the cores with the default render tile size of 128. So I got a time of 3:36.
I rendered again with a tile size of 48 and got a time of 1:42.
I rendered again with a tile size of 16 and got a time of 1:34.
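The tile-size effect is mostly just counting: the renderer presumably hands each thread one bucket at a time, so a 400x400 image cut into 128-pixel buckets only yields 16 buckets, and that isn't enough to keep every thread busy through the whole render. A quick back-of-the-envelope check (the 12-thread count is just an example, and the "buckets per thread" threshold is my own rule of thumb, not anything from the renderer):

```python
# Bucket-count arithmetic for a 400x400 render at different tile sizes.
# Assumption: each bucket is processed by one thread at a time, so you want
# several buckets per thread or some cores sit idle near the end.
import math

width = height = 400
threads = 12  # example machine: 6 cores / 12 threads

for tile in (128, 48, 16):
    buckets = math.ceil(width / tile) * math.ceil(height / tile)
    verdict = "plenty" if buckets >= threads * 4 else "too few"
    print(f"tile {tile:3d}px -> {buckets:4d} buckets ({verdict} for {threads} threads)")
```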
Wow that's fast. I want one.
1:34!!!! Aus-some, mate! :-)
It will really shine when you start doing animations.
My Octane box did 6:02 with the default settings and tile size....better than I expected (vs my i7).
Didn't know (tile) size made a difference.
Will try it out. :)
5:40 after changing tile size from the default 128 to 16........not earth shattering, but better.
Haven't tried overclocking, etc. yet.
Garstor,
how are you keeping your machine cool? Extra fans? Oversized heat sinks?
....in Texas in a heat wave! =O
Some of Carrara's features will only use one processor. I don't know what they are off-hand, but I seem to recall that maybe physics was calculated with one processor, and there might have been a render setting or two....
After looking into overclocking, I decided it wasn't worth the risk of overheating. I was happily surprised that the machine I built didn't melt down when I turned it on, that it still works fine after a year, and that it even beat the time of my ASUS G-73 that has an i7 and 8 cores (hyperthreaded) vs the Phenom II X4 955 in the home-built Octane box.
Garstor,
have fun with your new super computer....sounds like top of the line components.
And wait till it gets cold this winter......it'll be a great space heater. :cheese:
Garstor,
Would you mind doing a simple (though perhaps slow) test for me? I'm curious as to the speed-up ratio for an AMD chip from single-threaded to fully multithreaded. In Preferences you can turn off multithreaded rendering. What I would like to see is single-threaded vs multithreaded for an identical scene where it takes at least a minute (if not two or three) for the multithreaded render to complete. What I'm after is the scaling of the new AMD architecture with the Carrara render engine. My concern is each module (two cores) having only a single FPU, and how much Carrara needs FPU resources.
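For what it's worth, the numbers I'd pull out of that comparison are just the ratio of the two times plus the per-core (and per-FPU) efficiency, something like the sketch below. The times and core counts in it are placeholders, not measurements.

```python
# Speed-up and efficiency from a single-threaded vs multithreaded render of
# the same scene.  All values here are placeholders, not real results.
single_threaded_s = 480.0  # e.g. 8:00 with multithreading turned off in Preferences
multi_threaded_s = 120.0   # e.g. 2:00 with all cores enabled
cores = 8                  # hypothetical 4-module AMD chip (2 cores per module)
fpus = 4                   # one shared FPU per module

speedup = single_threaded_s / multi_threaded_s
print(f"speed-up:            {speedup:.2f}x")
print(f"efficiency per core: {speedup / cores:.0%}")
print(f"efficiency per FPU:  {speedup / fpus:.0%}")  # nearer 100% if the shared FPU is the bottleneck
```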
Sorry for the late reply...I never got an email notifying me of the thread update (the new forums were supposed to fix that...I guess it is one more thing to add to the long list of To Fix items that the Daz devs have).
Send me a PM with the details of your test and I'll see what I can come up with.
Yeah, it has been hot lately. I'm embarrassed to admit these new boxes don't have many fans. The heat sinks seem capable for the time being and I do not intend to overclock.
That figures... ;)
I suppose that I could do some further testing with Process Explorer / Process Monitor to see where threads activate.
Now if only the day job would stop intruding on my hobby! :D