Does it really help to render in higher resolution and then resize?
I read the tip about rendering faster by rendering in higher res and then downsizing the image to the desired size.
Well, I have done so and usually render about 3 times the size.
Today I tried to render 6 times the size, just to see what I get.
Well, see for yourself.
Here is the test:
- Desired output = 3 MP (2000x1500)
- Every test render ran for 30 minutes
- 3 MP for 30 min. = about 2000 iterations
- 10 MP for 30 min. = about 700 iterations
- 19 MP for 30 min. = about 300 iterations
The top shows a cropped comparison between all three images, downsized but otherwise unaltered.
The bottom shows a cropped comparison of all three, but all images have been denoised by the same amount before downsizing.
Now, who can tell me which version is which?
(The 3 MP version has not been downsized, obviously)
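As an aside, those three data points are consistent with a fixed per-pixel sample budget: iterations times megapixels comes out nearly constant, which a quick script confirms:

```python
# The 30-minute sample budget is roughly constant: the GPU produces
# a fixed number of samples per unit time, spread across the pixels,
# so iterations x megapixels should come out about the same.
tests = [(3, 2000), (10, 700), (19, 300)]   # (megapixels, iterations)
for mp, iters in tests:
    print(f"{mp:>2} MP x {iters:>4} iterations = {mp * iters}")
# -> 6000, 7000, 5700: close enough to call it a constant budget.
```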
Comments
All you're really doing is shrinking the details that expose unfinished noise in unconverged renders until they're too small to be seen. But the downsize needs to be substantial relative to the resolution you rendered at, so that the noise radii in that render become small enough for the resampling filter's pixel radius to eliminate them. The effect is a blur plus a size reduction that hides the blur itself.
The upside is that rendering huge at 8K and downsizing to 2K after only 300 iterations is much faster than rendering to typical convergence conditions at 2K. Does 300 iterations reduce the noise radii enough at 8K for them to disappear at 2K? That depends on the scene's characteristics: maybe, maybe not. It's a complex enough problem that whole AI networks have been devoted to doing it reliably.
If you CPU render and are trying to make a comic book or an animation, it makes sense, but for just various 'painting'-type stills? I wouldn't bother. Also, if you have an Nvidia GPU that can run Iray, the extra work of resizing will take more time than just letting the GPU render to convergence for most scenes.
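For anyone who wants to script that downsize-with-resampling step, here's a minimal sketch using Pillow; the filenames and the 3x factor are just example values matching the test above:

```python
# Downsize an oversampled render back to the target resolution.
# A resampling filter like Lanczos averages a neighborhood of source
# pixels into each output pixel, which is what swallows the noise.
from PIL import Image

img = Image.open("render_6000x4500.png")      # 3x oversampled render
target = (img.width // 3, img.height // 3)    # back to 2000x1500
img.resize(target, resample=Image.LANCZOS).save("final_2000x1500.png")
```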
I do render in Iray with an RTX 2060.
What do you mean by 'the extra work of resizing'?
I do resizing and denoising after the render is done with separate software.
BTW,
A = 10MP
B = 19MP
C = 3MP
Very frequently I get one-pixel fireflies, which can take a geologic age to get rid of.
Reducing image size by only 50% removes these almost every time.
So I'm convinced it's a good idea in most cases. It might not be perfect, but I can't think of any serious cases where double size makes anything worse.
Are you just resizing the image? What software? How does it do resizing?
I got very good results scaling images down to get rid of fireflies just using GIMP and the Cubic algorithm. Those one-pixel fireflies @Oso3D mentioned are dealt with particularly well.
Oso3D said that more concisely and politely than whatever I was writing
1. Your image is very small; it's hard to see the details there. I'm not even sure what you rendered, a patch of skin?
2. Resizing also works very well if you do some corrections on the original image (hiding hair, poke-through, whatever else). You don't have to make those corrections perfect, because the resizing hides some of the "errors".
3. I have images that are sometimes still not perfect even at 100% completion. Resizing works very well there, too.
4. I would never resize an image that went to less than 90% completion; that's just what I look for in renders. I usually render at the next resolution up (Full HD if I want HD, QHD or UHD if I want FHD).
I don't use this technique. A larger image takes longer to render: a 2K image has 4 times the pixels of a 1K image, and a 4K image has 16 times the pixels of a 1K image. So instead of rendering a larger image, I just render at the size I want and let it run according to my settings to reach convergence.
If you don't want to use the denoiser in Daz Studio, you can use the same Nvidia denoiser or the Intel denoiser as a standalone app after the image is finished: https://taosoft.dk/software/freeware/dnden/
If the pixel-sized spots aren't too bad, another option is the despeckle filter in Photoshop. Duplicate your base layer and then apply the despeckle filter so you can blend it back to get rid of the spots you need to. I already render at ~5,000px on the smallest side...going much larger than that in hopes of downsampling to clear up spots isn't much of an option while still staying on the GPU.
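Photoshop's Despeckle is essentially a small median filter, so the same effect can be scripted with Pillow if you prefer; the 3x3 size here is an assumption, since Adobe doesn't document the exact kernel:

```python
# Approximate Photoshop's Despeckle with a 3x3 median filter.
# A median filter replaces each pixel with the median of its
# neighborhood, which deletes single-pixel fireflies outright
# while leaving edges mostly intact.
from PIL import Image, ImageFilter

img = Image.open("render.png")
despeckled = img.filter(ImageFilter.MedianFilter(size=3))
despeckled.save("render_despeckled.png")
# Blend this back over the original where needed, as described above.
```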
Using subject matter with some detail may be more enlightening:
Top: 512x512, 956 iterations in 72 seconds
Middle: 512x512, 900 iterations + denoising in 75 seconds
Bottom: 1536x1536, 100 iterations + denoising in 76 seconds (resized to 512x512 using bicubic interpolation)
Rendering a much higher resolution image helps the denoiser perform much better and yield a much higher quality image for the same amount of render time. The only downside is that rendering larger requires more VRAM.
- Greg
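Something like the bottom case (denoise big, then downsize with bicubic) can be approximated in a few lines of OpenCV. Note the non-local-means call here is only a generic stand-in for whatever denoiser you actually use (Nvidia, Intel, etc.), and the filenames are made up:

```python
# Sketch of the "render big, denoise, then downsize" pipeline.
import cv2

img = cv2.imread("render_1536x1536.png")

# Denoise first, at full resolution: the denoiser has 9x the pixels
# to estimate noise from, which is why it performs better here.
denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

# Then resize down with bicubic interpolation, as in the test above.
final = cv2.resize(denoised, (512, 512), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("final_512x512.png", final)
```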
An 8k image is not double the size of a 4k one. It is 4 times.
4k = 3840x2160 pixels. 8,294,400 pixels total
8k = 7680x4320. 33,177,600 total.
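Easy to verify in a couple of lines of Python:

```python
# Pixel counts for the standard 4K and 8K UHD resolutions.
print(3840 * 2160)           # 8294400
print(7680 * 4320)           # 33177600
print(33177600 / 8294400)    # 4.0: doubling both dimensions quadruples the pixels
```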
@algovincian So it comes down to the filtering radius/ratio: a double-size render is effectively using half the filtering radius, plus a little bit more with the resize interpolation. What filtering are you using? Mitchell at 1.0?
Don't know what filtering the denoising uses, but could you redo the first one with a lower filtering radius and guesstimate what the third one effectively is? I think triangle is bilinear, if I remember correctly...
I can't even render as large as I want without running out of VRAM on my 2080 Super, lol, let alone render at double or more the size I want the end result to be. It probably works great if you're just rendering for image galleries on the internets.
All 3 renders in my examples used the default Pixel Filter in DS (Gaussian with a radius of 1.5). I didn't save the scene and don't feel like running them all again, but if you run the tests yourself, I believe you'll see that using Mitchell with a radius of 1.0 will indeed yield a sharper image (sharper than the 3X render resampled in my example, too). However, this comes at a cost - the image will be substantially more noisy (grainy).
IMHO, I don't believe it all comes down to filtering. Like I said in my original post, running the de-noiser on a higher resolution image helps the de-noiser to perform better.
- Greg
The "mother" of this technique (and the person who gave it the same manual oversampling) also doesn't refer to total amount of pixels and talks about downsizing 50%: http://buerobewegt.com/quicktip-rendering-even-faster-in-iray/
If you notice a firefly in an over-8-million-pixel image (4K and up), it's most likely several pixels in radius. If you reduce the size, then surrounding pixels that aren't fireflies get sampled and blurred into the firefly. My screen is only 1920x1080 at 27", and sitting about 2' away I see no discernible pixels (in the 'p' in 'pixels', for example) and no fireflies at all 99% of the time once I get past 300+ iterations (that's different from lacking unconverged lighting details).
A reason that I like to render at double dimensions is for postwork.
When using the clone stamp tool, etc. in Photoshop to correct minor stuff (poke-thru, hair embedding in shoulder, etc.), having 4x the pixels to work with helps a bit in my experience. Sure, you can try to fiddle with such things, but sometimes it's just easier and faster to fix it in post.
Plus, the other reasons a few people above have mentioned make this a no-brainer for me. I routinely downsize from 3840x2160 to 1920x1080 after I've done my Photoshop tweaks. Since I work with everything in Photoshop anyway, changing the image size takes no time at all at that point.
Renders do take longer, and if I'm doing a series of images I'll often render at the 'default' target resolution to reduce render times. The comments above about doing just a couple hundred iterations at double dimensions and then downsizing in post, instead of say 1,500 or more, intrigue me...
Then he's wrong, and he didn't invent downsampling. The whole idea is that each step up or down (1080p, 4K, 8K) multiplies the pixel count by 4, so you can average 4 pixels into one.
This is a very important part of high quality video production. Film at a higher resolution, the highest res the camera will support generally. This lets downsampling create sharp images while letting the editor do things like reframe the shot or adjust the dynamic range of the image without losing any quality.
Doubling the linear dimensions of an image quadruples the pixel count. The verbiage causes confusion, but that's the relationship.
Change the preset in the General group of Render Settings to something other than Active Viewport.
I'm not a big fan of the denoiser because of the loss of image detail. But I have used it to render scenes then spot-render the areas that look bad, such as the hair where it's very noticeable. I set up my scenes to render fast anyway and they usually don't take longer than 30 minutes at most.
I find it humorous that we have ever-more-detailed image maps, normal/displacement maps, and HD morphs to improve detail, then we go and stop renders before that detail is apparent and while they still contain artifacts, and then try to fix it with the DS or a post-work denoiser. Seems all a bit crazy to me.
I agree with fred9803. If you light the render properly and then let it run to completion, you won't have noise.
I'd recommend the Intel denoiser over the in-render one. Finish your render, run it through the Intel denoiser, then decide how much (or how little) each part of your render needs by blending in the denoised copy in post. The Intel denoiser is also far and away more intuitive than the in-render one... you'll be shocked at how much detail is retained, most if not all.
Not everyone does that.
Also, it depends... if the shot is framed further back so that entire characters' bodies are in view, you don't need HD morphs and very high-resolution maps. But if you're doing a closeup, it certainly makes a difference.
I tried rendering up in size, and it does seem to generally work. I don't do it, though, because my final renders are at 4K and I generally didn't have enough VRAM to render at 8K. Personally, I also don't use the denoisers for still images. I tried the built-in denoiser and the Intel one from this thread; as smart as they're touted to be, I find they still smash out detail, especially with skin, giving it a more photoshopped/airbrushed look. I don't really do video, but from my limited experience it's a real timesaver there.
You can always combine the denoised image and the base image into a final one, masking out the bits you don't want or lowering the opacity to make the effect less severe.
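That blend is scriptable too; with Pillow it's a one-liner for uniform opacity, or Image.composite with a hand-painted mask for the selective version (filenames here are placeholders):

```python
# Mix a denoised copy back into the original render.
from PIL import Image

base = Image.open("render.png").convert("RGB")
denoised = Image.open("render_denoised.png").convert("RGB")

# alpha=0.5 takes half from each; lower it to keep more raw detail.
mixed = Image.blend(base, denoised, alpha=0.5)
mixed.save("render_mixed.png")

# For selective masking, paint a grayscale mask (white = denoised)
# and use: Image.composite(denoised, base, mask)
```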
True, but then IMO I'm simply spending too much time with postwork. I don't really care for doing postwork all that much.
I went the "throw more horsepower at it so renders don't take as long" route.
If render time is an issue, it may make sense to break the render into layers and composite using the Canvas feature. This also gives more control over scene lighting. Sickleyield has an excellent tutorial on this:
More resolution in your render seems to help the renderer decide how each pixel should look. IME, the result is simply better with a larger render that is downsized. Of course, if you're at 4K to begin with, it's probably fine.
If you must denoise, you might try a dedicated photo denoiser. The noise from a noisy render is a lot like the noise from a lowlight photo. Of course, that may obliterate some detail.
The downsizing technique is most effective against fireflies. It won't help much if your render looks like a noise storm.