Comments
So, an upgrade today won't give me a relevant performance boost? Should I keep my i7 3770 for some more time and save money?
Not that a CPU upgrade won't help, but not in the same way that an NVIDIA GPU will. The 1070 is what's going to really rock the boat for Iray renders. Me, however, I'd still get the 8700K, sell the 1070, and pick up the 1070 Ti (though I'm not sure how your budget looks).
Unfortunately, my budget doesn't allow me to change both now.
The main question for me is whether changing my i7 3770 will give me more rendering performance with my 1070.
No, the difference will be negligible. For tangible performance gains, you'll need to either add or change your graphics card.
Thank you for the help!
Yeah Jack nailed it.
A couple of things I'd be interested in...
While PCIe lanes primarily affect initial loading times, there is also a DMI/SATA bottleneck that's encountered every time there is a screen refresh, and especially when that r.png gets written to the temp folder.
If you have a 10-minute render lying around, can you compare it when you change the "min. samples update" from the default of 1 to the maximum of 100?
Do you get any time saving?
The other thing is a benchmark of the simulation speeds.
I used the demo scene of the "simple sheet". Resolution doesn't matter to the simulation but the GPU doing other things like 'Iray Viewport mode' obviously will.
My fastest time was with viewport set to wire-frame and camera pointing at an empty part of the scene. lol
750 ti - 1m 5s
Ok, so in contrast to my previous benchmark: this is my new benchmark using both the GTX 1080 Ti and a GTX 1070.
It cut about 25 seconds off the previous benchmark. Considering it's a 2-minute render, cutting off 25 seconds is quite a lot.
So this verifies that mixing 1080 Ti's with 1070's does work, and does improve benchmark times.
OK, for those who might be curious how a dual 1080 laptop does with this test:
MSI GT83VR, with i7-6820HK, dual 1080 (SLI) (8 GB each):
First pass: 1:40
Second pass: 1:25
With CPU enabled I've actually noticed a drop in render speed. Not sure what the deal is, but yeah...
Basically, if I have all 3 checked, it goes from 1:25 to about 1:30-1:35. I'm no expert, but if I were to diagnose it with my limited knowledge, I'd say that somewhere along the line it's rendering old information instead of new. Like the GPUs are doing their job and sending the information out, and then the CPU is taking the image the GPUs already rendered and rendering the same image again. So it just slows the process down while producing no new information. I can tell from HWMonitor that the CPU is working; it is doing something, because the cores heat up... but whatever it's doing has a null effect, which leads me to believe it's re-rendering an iteration that's already been rendered.
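One way to see how adding a much slower device can hurt instead of help: a toy scheduling model (my own sketch, not Iray's actual algorithm, with made-up device rates) where the wall-clock time is set by whichever device finishes its share of iterations last.

```python
# Toy model of splitting render iterations across heterogeneous devices.
# This is NOT how Iray actually schedules work; it just illustrates why
# a slow CPU added to fast GPUs can lengthen the render.
def makespan(iterations, rates, shares):
    """Wall-clock time when each device gets a fixed share of iterations.

    rates  -- iterations/second per device (assumed values)
    shares -- fraction of total iterations assigned to each device
    """
    return max(iterations * s / r for r, s in zip(rates, shares))

total = 4800                 # roughly this benchmark's iteration count
gpu_rates = [40.0, 40.0]     # two fast GPUs (iterations/s, made up)
cpu_rate = 2.0               # much slower CPU (made up)

# GPUs only, split evenly:
gpus_only = makespan(total, gpu_rates, [0.5, 0.5])

# Add the CPU but give every device an equal share: the CPU becomes the
# bottleneck and the whole render slows down dramatically.
naive = makespan(total, gpu_rates + [cpu_rate], [1/3, 1/3, 1/3])

# Split proportionally to actual speed and the CPU helps, but only slightly:
r = gpu_rates + [cpu_rate]
prop = makespan(total, r, [x / sum(r) for x in r])

print(gpus_only, naive, prop)  # gpus_only=60.0, naive=800.0, prop~58.5
```

The point is that the CPU only helps if the scheduler's load-balancing is nearly perfect; any mis-assignment toward the slow device costs more than the device contributes.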
Yes, with my new 1080ti together with a 1070 my render times are improved something like 60-65% over just a 1070 alone. Not sure if that matches your results...I got a bit confused trying to decode your two images.
2 min. 4 sec.(Yay!) on my new rig with a 1080ti, 32 GB RAM, i7-7700K @ 4.20GHz, MSI Z270-A PRO MB, Windows 10 Pro. The iteration it reached was 4,825 at 95.06%. Probably would have taken all night on my previous PC.
For this test I have both CPU and GPU checked under Photoreal Devices and Interactive Devices, and OptiX Prime Acceleration on. Being that this is my first Nvidia card, I'm not sure what settings to use, and I've got to leave so I can't run any more tests this afternoon. If someone who has a similar PC or is familiar enough with hardware can let me know if I should expect to get better results with different settings, that would be great. Otherwise I'll experiment and get back to you later this week.
I'm still setting up the PC too so I haven't been able to play with it.
dawnblade, that's exactly what others, including myself, have gotten with a 1080 Ti: 2 minutes. And about 3 minutes with a 1070.
Below is a summary I made of results posted in this thread.
System: Windows 10 Pro 64-bit, Intel Core i7-2700K @ 4.3 GHz, 16GB DDR3 1600 MHz, NVIDIA GeForce GTX 1080 Ti (EVGA)
Total Rendering Time with OptiX Prime Acceleration and GPU only: 1 minute 52.69 seconds
Total Rendering Time with OptiX Prime Acceleration, GPU and CPU enabled: 1 minute 57.24 seconds
Total Rendering Time without OptiX Prime Acceleration and GPU only: 3 minutes 11.18 seconds
Nvidia GeForce GTX 1080ti, Win 10 Pro, 32GB DDR4 2400 RAM, i7-7700K @ 4.20GHz, MSI Z270-A PRO MB
CPU and GPU, OptiX Prime Acceleration on: 2 min. 4 sec. 4,825 iterations at 95.06%.
CPU and GPU, OptiX Prime Acceleration off: 3 min. 20 sec. 4,825 iterations at 95.07%.
GPU only, OptiX Prime Acceleration on: 2 min. 4 sec. 4,816 iterations at 95.07%.
GPU only, OptiX Prime Acceleration off: 3 min. 23 sec. 4,798 iterations at 95.02%.
Just my 1070: Total Rendering Time: 2 minutes 45.53 seconds
1070 + 960: 2 minutes 2.17 seconds
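Summaries like the one above can be automated. Here's a sketch that parses the "Total Rendering Time: X minutes Y seconds" lines Daz Studio writes to its log (format inferred from the results quoted in this thread) and converts them to seconds for easy comparison.

```python
import re

# Matches the "Total Rendering Time" lines quoted throughout this thread;
# the minutes part is optional because short renders omit it.
LINE = re.compile(
    r"Total Rendering Time:\s*(?:(\d+)\s*minutes?\s*)?([\d.]+)\s*seconds?")

def render_seconds(log_line):
    """Return the render time in seconds, or None if the line doesn't match."""
    m = LINE.search(log_line)
    if not m:
        return None
    minutes = int(m.group(1) or 0)
    return minutes * 60 + float(m.group(2))

print(render_seconds("Total Rendering Time: 1 minutes 52.69 seconds"))  # 112.69
print(render_seconds("Total Rendering Time: 50.78 seconds"))            # 50.78
```

Feeding the whole Daz log through this line by line gives a quick table of all the runs in a session.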
One thing I noticed: my 1070 is now running ~6°C hotter (only 61°C, so not a huge issue), probably due to more restricted airflow and absorbing some heat from the 960 under it.
Hoping the gain is a bit more apparent on a real scene render lol.
Keep an eye on the temperature though. Mine was getting pretty warm...
I've decided to unplug it for fear of destroying the 1070. I'll just save up some money, buy another 1080 Ti, and run a proper SLI setup.
You can track your hardware temperatures with this free program: https://www.cpuid.com/softwares/hwmonitor.html
Anything above 90°C is unhealthy, keep that in mind. If you see things reaching 90°C or higher, you're asking for hardware failure; sustained temperatures in that range will shorten the life of the card.
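On Nvidia cards you can also read temperatures without a GUI via the real `nvidia-smi` tool (`nvidia-smi --query-gpu=name,temperature.gpu --format=csv,noheader`). A small sketch that parses that CSV output and flags cards at or above a chosen limit; the sample string below is mocked-up output, not real readings.

```python
# Example output of:
#   nvidia-smi --query-gpu=name,temperature.gpu --format=csv,noheader
# (the query flags are real nvidia-smi options; the readings are made up)
sample = """\
GeForce GTX 1070, 61
GeForce GTX 960, 66
GeForce GTX 1080 Ti, 91
"""

WARN_C = 90  # sustained temps at/above this are commonly treated as risky

def parse_gpu_temps(csv_text):
    """Parse the two-column CSV into (name, temp_celsius) pairs."""
    rows = []
    for line in csv_text.strip().splitlines():
        name, temp = line.rsplit(",", 1)
        rows.append((name.strip(), int(temp)))
    return rows

def hot_gpus(csv_text, limit=WARN_C):
    """Return only the cards at or above the temperature limit."""
    return [(n, t) for n, t in parse_gpu_temps(csv_text) if t >= limit]

print(hot_gpus(sample))  # [('GeForce GTX 1080 Ti', 91)]
```

Running the command in a loop during a long render (e.g. via `subprocess.run`) gives a simple temperature log you can check afterwards.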
What card do you have? I have never seen mine go above 66 so far, either my 960 or 1070, during rendering or gaming. I usually get one with dual fans though, that probably helps some, along with my case fan setup.
Some scenes go fast, others not so much....
i7-7700K 4 core overclocked to 4.66 GHz
RAM: 32 GB
Windows 10 64
GTX 1080 Ti Founders Edition
GPU Memory 11.0 GB
Using SickleYield's reference scene:
FIRST RUN:
GPU Only + Optix:
2017-11-20 18:00:12.898 Total Rendering Time: 2 minutes 10.11 seconds
GPU + CPU + Optix:
2017-11-20 18:14:29.229 Total Rendering Time: 2 minutes 11.41 seconds
SECOND RUN:
GPU Only + Optix:
2017-11-20 18:55:20.349 Total Rendering Time: 2 minutes 9.92 seconds
GPU + CPU + Optix:
2017-11-20 19:00:45.443 Total Rendering Time: 2 minutes 17.41 seconds
i9-7900X @ 4.3 GHz, Win 10
Titan Xp + GTX 1080 Ti Founders Edition + GTX 1080 Ti EVGA FTW3:
Optix, GPU only: Total Rendering Time: 51.81 seconds
2nd pass with scene preloaded in vram: Total Rendering Time: 40.88 seconds
Preload with CPU: Total Rendering Time: 40.89 seconds
Titan Xp only: Total Rendering Time: 2 minutes 2.16 seconds
1080 Ti FE only: Total Rendering Time: 2 minutes 14.51 seconds
1080 Ti EVGA FTW3 only: Total Rendering Time: 2 minutes 12.11 seconds
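These three-card numbers square roughly with simply adding the per-card iteration rates. A back-of-envelope sketch (my own estimate, not an Iray formula) that predicts the combined time from the single-card times above, assuming ideal scaling with no scene-load or sync overhead:

```python
def combined_time(single_card_seconds):
    """Estimate multi-GPU render time assuming per-card iteration rates
    simply add (ideal scaling; ignores scene loading and sync overhead)."""
    return 1.0 / sum(1.0 / t for t in single_card_seconds)

# Single-card times from the post above, converted to seconds:
titan_xp = 2 * 60 + 2.16    # 122.16 s
ti_fe    = 2 * 60 + 14.51   # 134.51 s
ti_ftw3  = 2 * 60 + 12.11   # 132.11 s

est = combined_time([titan_xp, ti_fe, ti_ftw3])
print(round(est, 1))  # ~43.1 s
```

That estimate of ~43 s sits between the observed 51.81 s first pass (which pays the scene-load cost) and the 40.88 s preloaded pass, so the cards are scaling close to ideally once the scene is in VRAM.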
I just upgraded the GPU in my laptop (Mobile Workstation)
Went from an Nvidia Quadro K5000M 4GB (similar to the GTX 680M) to an Nvidia GeForce GTX 980M 8GB.
Here are my benchmark results:
Nvidia GeForce GTX 980M 8GB
GPU only with OptixPrime enabled.
5 minutes 19.55 seconds
Held full boost clock speed @ 75°C for the whole render.
My shiny new 1080Ti arrives tomorrow so I thought I'd run the test on my 1050Ti before I pull it out and retire it, and it came in at 8m45s. I will post an update tomorrow once I'm up and running.
(The 1050Ti isn't a bad card for the price, to be fair, and I didn't mind the long rendering times -- that's what batch jobs were invented for -- but the 4GB of RAM was getting to be a severe limiting factor and I was having to make too many compromises and do too much post-processing and compositing. *sigh* It seems like only yesterday I was happy with DKBTrace on my trusty 512KB A500, and here I am having stumped up the premium for the Ti because the 8GB on the vanilla 1080 could be cutting it a bit fine. I can't decide if "progress" is good or bad!)
Finally got my Threadripper setup
Asus Zenith Extreme motherboard
TR4 1950x
4 x GTX 1070 AERO 8G OC
128GB RAM
Having some problems with my Wacom, probably due to a Win10 update, so all I did was one render using the CPU and all 4 GPUs; the total time was 1 min 16 sec.
Wow, Robert, something doesn't look right. Earlier in this thread someone with two GTX 1070's reported 1 minute 45 seconds, and you're getting 1 minute 16 seconds with four 1070's?
And a 1080 Ti plus a 1070 was reported at 16 seconds less than you, at 1 minute. Hard to imagine that a 1080 Ti plus a 1070 outperforms 4 x 1070's...
You should try running the benchmark using just the GTX 1070 cards and see what your result is.
I've noticed sometimes with very powerful GPUs/multiple GPUs if you add the CPU into the render cluster it can slow the whole thing down a little.
Yeah, my original setup with only two 1070's was only 30 sec slower.
I intend to do this, but I need to get the Wacom tablet problem figured out, as I found that it's actually opening two instances at once sometimes. Last night after I posted, I went to shut down Studio and found that there was a second render running behind the one that I posted the times for.
OK, took the CPU out of the loop; straight GPU render: 50.78 seconds.
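Using the single-1070 time reported earlier in the thread (2 minutes 45.53 seconds), 50.78 s with four cards works out to roughly 81% scaling efficiency. A quick sanity check, assuming the four cards are identical and ideal scaling would be linear:

```python
def scaling_efficiency(single_t, n_cards, multi_t):
    """Observed speedup relative to ideal n-card scaling (assumes
    identical cards and perfectly linear scaling in the ideal case)."""
    ideal = single_t / n_cards
    return ideal / multi_t

single_1070 = 2 * 60 + 45.53   # 165.53 s, single-1070 time from this thread
quad_1070   = 50.78            # s, the four-1070 GPU-only result above

eff = scaling_efficiency(single_1070, 4, quad_1070)
print(f"{eff:.0%}")  # ~81%
```

Around 80% is a plausible figure once per-iteration sync and the fixed per-frame overhead are taken into account, so the four-card result looks consistent with the earlier two-card numbers.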