© 2024 Daz Productions Inc. All Rights Reserved.
Comments
That's a good idea .. For OpenGL it should be enough to convert everything to the default Daz shader, then set the ambient color to white and strength to 100%. And yes, it is a pain not having the albedo export, even if I guess most fine details come from the normal buffer.
Because of the nature of the denoiser, which specifically tries to spot ray-tracing noise, it generally won't work at all on an image that's had any other editing (including resizing - and your attempt to screenshot it has left it about 10% smaller than the file that was uploaded, even if we assume the original image wasn't resized at all), so that's not particularly conclusive.
That said, so far I am generally more impressed by the Intel denoiser's ability to discern between detail and noise than I am by the Nvidia version's.
I'll stress that the results are far from perfect, given my notes about all the effects that OpenGL isn't trying to render at all - the test image I used certainly ended up with some odd artefacting because of the mismatch in some areas. However, it does show a lot of promise for what could be achieved with proper albedo data.
That's because the girl on the left has 98% convergence so there's almost no noise to process at all. It would be hilarious for a denoiser to blur even where there's no noise.
I am not sure if this LPE applies as I was just looking up what albedo even is but I wanted to share in case it is possible this helps at all:
cycles vs intel denoiser (with intel using the beauty canvas only)
To better show what I mean, here's a proof of concept: a simple cube rendered in Blender with 8x steps. The first image is the original noisy render, which would be the beauty canvas in iray. The second image is denoised with Cycles. The third image is denoised with Intel using only the original render as source. You can clearly see that Cycles doesn't lose details or textures - it simply can't, because it's using the albedo and normal buffers. Intel, on the other hand, is just trying to guess what's noise and what's data, and the result is far from decent.
original image (beauty canvas)
cycles denoiser (using the albedo and normal buffers)
intel denoiser (using the beauty canvas only)
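One common way renderers exploit the albedo buffer (a hedged illustration - not necessarily exactly what Cycles does internally) is albedo demodulation: divide the noisy color by the albedo so the denoiser only has to smooth lighting, then multiply the clean texture back in afterwards. A minimal per-pixel sketch in Python, with function names of my own invention:

```python
EPS = 1e-4  # guard against division by zero on black albedo

def demodulate(color, albedo):
    """Strip texture detail so a denoiser only sees illumination."""
    return [c / (a + EPS) for c, a in zip(color, albedo)]

def remodulate(illum, albedo):
    """Reapply the (noise-free) albedo after denoising."""
    return [i * (a + EPS) for i, a in zip(illum, albedo)]

# the round trip leaves the pixel unchanged, so no texture detail is lost
pixel = [0.8, 0.4, 0.2]
albedo = [0.9, 0.5, 0.3]
restored = remodulate(demodulate(pixel, albedo), albedo)
assert all(abs(x - y) < 1e-9 for x, y in zip(restored, pixel))
```

Because the albedo buffer itself is essentially noise-free, the texture detail survives no matter how aggressively the illumination gets smoothed.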
While I mainly use cycles for rendering I also like to compare things for fun .. Thank you so much, going to do some tests ..
EDIT: I had no luck with that LPE, it just shows a black EXR for me. Also, even if I don't know how LPEs work, I guess that's not correct because it's too simple. Again, the Intel documentation is linked below.
https://openimagedenoise.github.io/documentation.html
I'm updating the DragNDrop app to support albedo and normal maps. Does anyone know which formats these normally are in (jpg, png, etc.), and if they need to be in the same format as the render?
Canvasses support all the features of LPEs, as described here https://raytracing-docs.nvidia.com/iray/manual/index.html#reference#light-path-expressions - I don't know if that is how Albedo AOV is built, if it is it should be doable in DS.
As I've heard (and seems to be backed up by testing above), attempts to get Daz to supply an Albedo canvas come out black for some reason.
(EDIT: I should add that I'm away from home at the moment, so can't currently test it myself).
They would normally be EXRs if you're storing them at the same time, but I think my previous testing shows that you can mix and match formats.
(However, that said, anyone using JPG prior to denoising deserves a slap).
Well, it's possible that there is a bug somewhere - but it's quite possible that the bug is in the LPE being fed to iray, rather than on the Daz Studio or Iray side. This is the first I've heard of Albedo AOV, do you have a link to previous discussions - ideally giving the LPE used?
Umh...the difference is noticeable, but this stuff is too complex for me!
I hope one day Daz will include this method for noobs like me too xD
OK, thanks, I'll include them all then.
Yeah, the Intel denoiser seems better from my experience. I've been using it exclusively. It also has a huge advantage in that it's non-destructive, so you keep the original render and can "paint back in" any lost details using layers in PS.
I use .png.
Since I'm always curious, what's the difference? Is it the fact that JPG is more compressed?
I'll have to get back to you on that one, as I'm having trouble tracking it down again (wrong search terms or something), so I'll have to check my desktop's browser history when I get back home.
PNG uses lossless compression, using more storage space to ensure perfect reproduction of the image.
JPG is lossy compression that throws away fine image detail in the name of reducing file size. High quality JPEGs are usually okay for most real life photos, because camera images are naturally at least a little noisy and you're not worried if it doesn't reproduce the noise in the image exactly. (For an analogy, JPEGs are a bit like translated text. It won't be perfect - perhaps the adjectives have slightly different connotations - but you'll probably get the general meaning).
If you're doing editing on an image, and eventually need to compromise on file size (upload limits/download speed/whatever), you only really want to suffer that loss of detail once, and only at the final stage, in order to be able to best edit the image.
This is particularly important where these denoisers are concerned, because they're trying to spot the difference between ray-tracing noise and other detail, so smudging and smearing it just makes it harder and more likely it will guess wrong in the "noise or detail" game.
~~~~~
It should be said that even PNG is a compromise, as it uses 8 or 16 bit integer bit depth (8 being more common), meaning it often has to lose contrast in very bright or dark areas in order to have more contrast for the majority of the image - a concept called "clipping". This can become important when colour correcting, as the contrast outside the image's range is lost and cannot be recovered - if an image is over or under exposed, the damage is done.
Formats like EXR, HDR and PFM instead store floating point data that can store a huge range of numeric data with good precision at any order of magnitude, and are excellent for heavy post processing, but they have huge file sizes* and cannot actually be properly displayed on any monitor without range compression (because while the format can accurately store the brightness of the sun at high noon, no monitor can actually accurately replicate that).
* Frequently 20-100x the size of a JPEG file of the same resolution, so not a sensible format unless you actually need perfect contrast across huge dynamic range.
Anyway, now I'm rambling.
Thank you for the detailed explanation Matt!! :D
I often publish JPG renders...mostly because when I save from PSD in Photoshop it's easier to export than PNG.
Do you think I would see a noticeable difference in the final render if I used PNG instead of JPG?
It depends on how large your render is. To be perfectly honest, you will most likely not see any real difference unless you render massive images and then examine them under a microscope. You can always test this for yourself by rendering the same scene and saving it as a PNG and then as a JPG. But in general, I always render out as PNG for the simple fact that PNG supports transparency. That means you can render just a character or two on a transparent background and Photoshop that image much more easily into another. For people who make book covers and that sort of thing, PNG is a must.
Just want to add that PNG has 10 compression levels (0 - 9), which you may be confronted with when saving PNG files in certain programs (e.g. IrfanView). People often think it has something to do with picture quality, which is not the case - the quality is the same at any compression level. Higher compression just means a smaller file size; the price is a slower read/write time, but on today's computers that's hardly noticeable except perhaps for very large pictures. So normally you can use level 9 for the smallest file size. Never use 0, that means no compression (huge file size).
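PNG's compressed stream is DEFLATE, the same one Python's standard zlib module exposes, so the "levels never change quality" claim is easy to verify: every level decompresses to byte-identical data, only size and speed differ.

```python
import zlib

# ~256 KB of repetitive sample data, standing in for image bytes
data = bytes(range(256)) * 1000

stored = zlib.compress(data, 0)  # level 0: no compression, just framing
best = zlib.compress(data, 9)    # level 9: smallest output, slowest write

# both levels are lossless - the round trip is byte-identical
assert zlib.decompress(stored) == data
assert zlib.decompress(best) == data

# only the output size differs
assert len(stored) > len(best)
```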
I think it's actually mostly write time, as it tries multiple different compression methods to see which works best for that specific image. Any difference in read time is, as far as I know, entirely negligible.
Yes, I render in .png, but I never publish something without editing it in Photoshop at least a little bit!
And from PS I save in .jpg.
Then I'll continue doing so, thanks for the explanation! :D
AFAIK the DAZ forum is not reducing the quality of PNG files you upload, while it does with JPG through recompression to save space, so you may want to use PNG if you don't want the quality of the renders you upload to be reduced.
I upload on DeviantArt!
Then I'll do some tests, thank you! :D
That is helpful information thanks! I always wondered how that affected the image. Now I can stop guessing!
DragNDrop for the denoisers has been updated, now with support for Albedo and Normal AOVs, as well as other improvements. More info and download link here:
https://taosoft.dk/software/freeware/dnden/
Let me know if there are any problems.
Declan (who wrote the denoiser scripts) also asked me to share some info, to clear up some of the apparent confusion around the AOVs:
"The denoisers can take up to two feature buffers, the normal and the albedo AOV. The normal AOV will help preserve the normals on the geo which is especially important for the finer details such as bump mapping. The albedo buffer will help preserve texture details. The albedo AOV should be a weighted sum of albedo layers with a result between [0, 1]. The Arnold renderer has a builtin LPE called denoise_albedo which does this and is worth checking out. For further information on this check out the oidn documentation on the subject https://github.com/OpenImageDenoise/oidn#rt "
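The "weighted sum of albedo layers with a result between [0, 1]" that Declan describes can be pictured like this - a hypothetical per-pixel sketch of my own, not the actual denoise_albedo LPE:

```python
def combined_albedo(layers, weights):
    """Blend per-layer albedo values and clamp the result into [0, 1].

    layers/weights here are plain numbers for illustration - a renderer
    would supply full per-pixel buffers for each shading layer.
    """
    total = sum(w * a for w, a in zip(weights, layers))
    return min(1.0, max(0.0, total))

# e.g. a surface that is 70% diffuse (albedo 0.8) and 30% glossy (albedo 0.9)
assert abs(combined_albedo([0.8, 0.9], [0.7, 0.3]) - 0.83) < 1e-9
```

The clamp matters because the denoiser expects the albedo feature buffer to stay in [0, 1], as the oidn documentation linked above explains.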
You've set up a great and clear page there, from now on I'll link it instead of this thread, which has become a little bit scarily long! xD
I still haven't figured out this albedo and AOV stuff...maybe it's for another day, when I'll have become better! xD
There are some tutorials on YouTube for how to create an albedo map from a diffuse map in Photoshop. PNG albedos made in Photoshop CS6 give an sRGB error message in the Nvidia denoiser, however. It should be possible to fix by editing the PNG files, but I'm not quite sure how yet.
Thank you for the update Taoz!!! Your DnD UI makes life so much easier for such a simple task. ('cause I don't like having to type stuff out in a command line if I don't have to)
You're welcome!
Okay, I have this thing downloaded and placed in the folder of my choice, and I have the paths set up - now what? Drag images from where to where? Is the image supposed to come from where my source file is? I'm confused as to where to drop the image - DnD? The Denoiser.exe?
Just use the mouse to drag the saved render(s) from Explorer or whatever file manager you use onto the DnD app.