Speeding up Iray renders.

I like to produce portraits which are photoreal(ish), but found my Iray renders were taking 40+ hours.
I now use DAZ Studio Iray HDR Outdoor Environments for both lighting and background; compared with my own poor lighting and background efforts they have cut my rendering time by up to 90%, bringing me down to about 4 hours before the experiment below. Incredibly useful, especially if you find the background acceptable.
In an experiment to further reduce rendering times I took 3 renders, all with exactly the same scene and exactly the same settings except as detailed below. I did not invent this technique; I read about it after a Google search.
I set up a well-lit portrait scene of a subject with human skin containing plenty of detail.
1. Long side of render 2000 pixels rendered for 2 hours 40 minutes
2. Long side of render increased to 3000 pixels (short side increased proportionately) and rendered for 1 hour 20 minutes
3. Long side of render 4000 pixels (short side increased proportionately) and rendered for 55 minutes
Using the iSmartphoto app I viewed the 3 renders on my 27-inch 5K Retina screen, flicking back and forth from one render to the next with the scroll wheel on the mouse.
A. Sitting back I noted that hue, saturation and lightness were exactly the same in each case.
B. Sitting back I could see no difference in level of detail - in fact I could see nothing to tell the three renders apart.
C. With my eye about 3 inches from the screen there were small but clearly observable differences in the detail; however, try as I might, I could not say for sure which was the most photoreal. I therefore believe the differences are not significant, especially as no one with normal sight who is in their right mind would sit that close in normal viewing.
If you are concerned about the time your renders are taking, you may find either or both of these ideas helpful.
Comments
Can you please upload examples of these images so that we can determine for ourselves the success of this technique? It sounds quite intriguing and might even work in a broad range of scenarios. Please let us see how it works in action.
This actually isn't a surprise. I posted a tip a while back from Buro Bewegt:
'Quicktip: rendering even faster in iray'
http://buerobewegt.com/quicktip-rendering-even-faster-in-iray/
Clever indeed. There is a difference, however, and with a little sharpening applied to the resized image I think they could look perfectly matched. Worth looking into. Thanks to both of you for this.
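For anyone who wants to script that resize-and-sharpen step outside Studio, here is a minimal sketch using Pillow (my choice of library, not anything the posters used; the sharpening amounts are guesses to be tuned by eye):

```python
from PIL import Image, ImageFilter

def downsample_render(img, target_long_side):
    """Shrink an oversized render to its intended size, then lightly sharpen."""
    scale = target_long_side / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    # Lanczos resampling averages neighbouring pixels, which hides render noise.
    small = img.resize(new_size, Image.LANCZOS)
    # A mild unsharp mask restores the edge crispness lost in the resize.
    return small.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))

# Example: a 4000 x 2667 render brought back down to a 2000-pixel long side.
big = Image.new("RGB", (4000, 2667), "gray")
final = downsample_render(big, 2000)
```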
Here we are.
Mouth open so we can see how shaded areas work.
NOTE: I have had to reduce quality to 95% in each case to get below posting limit.
What do you think?
I have just tried 8000 pixels & 40 minutes, but at only 14 iterations that produced a bad result, with tones not properly graduated.
I suspect that there is a minimum number of iterations, and if you drop below it the image becomes sort of posterised.
Need to determine that minimum number.
Compelling!!!!!
Why does it work?
I'm not sure, but my suspicion is that the early iterations provide the biggest increases in image quality, with each successive iteration providing a slightly smaller jump, yet each iteration takes a similar length of time.
By using this approach we get the full benefit of the big early jumps in image quality and avoid the time-consuming later, smaller ones. It works so long as we don't fall below ?X? number of iterations.
Comments anyone?
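That diminishing-returns intuition can be put in rough numbers. Monte Carlo noise typically falls as one over the square root of the sample count (a textbook property of path tracers generally, not something measured in Iray), so:

```python
import math

def relative_noise(iterations):
    # Standard Monte Carlo behaviour: noise ~ 1 / sqrt(samples).
    return 1.0 / math.sqrt(iterations)

# The first 100 iterations remove 90% of the starting noise...
early_gain = relative_noise(1) - relative_noise(100)
# ...while the next 900 iterations remove only about 6.8% more.
late_gain = relative_noise(100) - relative_noise(1000)
```

On those numbers, trading the long converged tail for extra pixels (which the downsample then averages away) is a sensible bargain, so long as you stay above whatever minimum iteration count avoids the posterisation noted above.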
That link talks about it.
It, meaning "oversampling".
You don't need every photon through the lens to resolve an image. You need enough for a good sample, as in statistics and such.
By oversampling, you are getting enough signal to create the image you need. It might not be enough for 100% resolution at your 2x image size, but enough at the intended size.
And it's the tail ends that take the most time to resolve, like a normal distribution wave I guess. The tails, immediately after 0% and before 100%, tend to be longer, but everything in the middle carries the most area.
By focusing the renderer on the fat middle, you can cut the overall length needed to bake your image.
That sounds right.
It is not surprising. With renderers like Iray most of the noise comes in the form of 1-pixel fireflies. When you downsample 2x2 -> 1x1 you average the firefly pixel with 3 other non-firefly ones; moreover, since the output is not HDR, the firefly value cannot go beyond 255,255,255, so the final result is quite decent.
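As a toy illustration of that averaging (numpy, with made-up pixel values): one clamped firefly in a 2x2 block gets pulled most of the way back toward its neighbours when the block becomes a single output pixel.

```python
import numpy as np

# A 2x2 block from a render done at 2x size: three normal pixels plus
# one firefly clamped at the 8-bit ceiling of 255.
block = np.array([[40.0, 42.0],
                  [255.0, 41.0]])

# Downsampling 2x2 -> 1x1 replaces the block with its average.
pixel = block.mean()
print(pixel)  # 94.5: noticeably bright, but nothing like a blown-out 255
```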
Another technique, which I often used in Cycles, is to render several different images with a changing random seed for the internal engine and then composite the results using the same algorithms employed for astronomical image processing. I fear that this route isn't applicable to Iray, since I saw no sign of a random number seed either in Studio or in the Iray documentation.
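For what it's worth, the compositing half of that Cycles workflow is easy to sketch: stack the seed-varied renders and take a per-pixel median, as in astronomical stacking (numpy, with synthetic noisy frames standing in for real renders):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)   # stand-in for the fully converged image

# Five "renders" of the same scene, each with different noise and a
# sprinkling of fireflies -- the variation a changing seed would give you.
renders = []
for _ in range(5):
    noisy = clean + rng.normal(0.0, 10.0, clean.shape)
    noisy.flat[rng.integers(0, noisy.size, 20)] = 255.0
    renders.append(noisy)

# The per-pixel median rejects outliers that a plain mean would smear in.
stacked = np.median(np.stack(renders), axis=0)
```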
Ok so where are the pixel settings?
We’ve created a pointer page listing various Iray threads that have been very helpful. Before starting a new Iray thread, please check those pages for your topic: it keeps the places to look for information to a minimum, and many Iray users have been discussing these topics there, so they may be much more helpful to you. When there are dozens of threads, one forgets where one saw what, and very helpful information gets missed.
http://www.daz3d.com/forums/discussion/56788/
Under 'General' in the Render Settings tab. You just increase the render size.