Question about Texture Compression Thresholds

If I'm understanding this option correctly (found in Render Settings -> Advanced), the value in the High Threshold determines the maximum size of textures, shrinking anything over this size setting to what is entered here. For example, if you set High Threshold to 1024 and load a 2048x2048 texture, it will be reduced and rendered at 1024x1024.
Could someone explain what the Medium setting is for, and what happens to a texture smaller than this size? If this is set to 1024 and a texture is 512x512, is it upscaled? Better understanding of these settings and how they affect render quality would probably be beneficial to many. Thanks in advance.
Comments
http://blog.irayrender.com/post/54506874080/saving-on-texture-memory
Thanks, but I still don't really know what they're talking about. Why do values need to be entered if there's basically a "medium" and "high" compression, and what should they be set to? It's also very difficult to tell any difference between their examples. Like most things Iray, official explanations leave more questions than answers.
Anything over the medium size is subject to medium compression, up to the High size at which the High compression is applied. Normal maps are not compressed at all as that produces noticeable artefacting.
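To put that rule in code form, here's a small sketch of the selection logic as described in this thread; it's not Iray's actual code, and the boundary behavior (whether a texture exactly at a threshold counts as over it) is my reading of the post, not verified:

```python
# A sketch of the threshold rule described above; not Iray's actual code.
# Boundary behavior (>= vs. >) at each threshold is a guess.
def compression_level(longest_side, medium_threshold=2048, high_threshold=4096):
    if longest_side <= medium_threshold:
        return "uncompressed"
    if longest_side < high_threshold:
        return "medium compression"
    return "high compression"

# With the example values suggested later in this thread (Medium 2048, High 4096):
for size in (1024, 2048, 3000, 4096, 8192):
    print(f"{size}px -> {compression_level(size)}")
```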
So should the values be ascending in order to keep quality? Like 2048 for Medium and 4096 for High? What if you don't want High compression applied to any textures (like if you're doing a skin close-up)?
I think you want the High value to be bigger than the Medium, but I don't know what would happen if they weren't in order. Setting the High value to 9,000 should be enough to stop compression of almost all images, though you may then wish to manually resize some non-critical images.
I am lost, really. So does increasing the Medium and High thresholds tax the GPU more? And when it does, does the texture look finer? I guess that's what all of us are wondering.
Increasing the values potentially increases the amount of data sent to the GPU (and if it doesn't then there was no point in doing it).
Unless the threshold values are quite small, you are unlikely to notice the difference unless you have a (very) close-up, or pixel peep.
Just for clarity to those reading, these settings do not cause iRay to resize the texture image, merely to compress it. By analogy, you can store a picture in a PNG file, which is lossless, but large. Or you can convert the same image (same width/height) to JPEG format and store it that way. Doing this will make the file size smaller, at the expense of introducing some (usually not noticeable) artifacts in the image. And you can choose higher JPEG compression, which will reduce the file size even more, at the cost of more artifacts.
So, if your texture is below the first threshold, it gets put into GPU memory uncompressed. Sort of like using PNG. If it's between the first and second threshold, medium compression is applied. Sort of like using JPEG with a mild compression level. The texture is still the same physical dimension (width/height) but occupies less GPU RAM than if it had not been compressed. Above the second threshold, a more aggressive compression is used, sort of like using JPEG with more compression/lower quality.
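If you want to see that analogy with real numbers, here's a quick sketch using Pillow. It has nothing to do with what Iray does internally; it just shows the same pixels stored losslessly versus at two lossy quality levels. The Mandelbrot image is a synthetic stand-in for a texture; any real texture file behaves the same way:

```python
# The PNG/JPEG analogy with real numbers: same pixels, different storage
# sizes. Requires Pillow (pip install Pillow). Purely illustrative; this
# is not the compression Iray applies.
import io
from PIL import Image

# Synthetic stand-in for a texture image.
img = Image.effect_mandelbrot((512, 512), (-3, -2.5, 2, 2.5), 100).convert("RGB")

def stored_kb(fmt, **opts):
    buf = io.BytesIO()
    img.save(buf, fmt, **opts)
    return buf.tell() / 1024

print(f"raw pixels:       {512 * 512 * 3 / 1024:.0f} KB")
print(f"PNG (lossless):   {stored_kb('PNG'):.0f} KB")
print(f"JPEG quality 85:  {stored_kb('JPEG', quality=85):.0f} KB")
print(f"JPEG quality 40:  {stored_kb('JPEG', quality=40):.0f} KB")
```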
The compression algorithm used in GPU memory isn't JPEG (my PNG/JPEG example above was just a conceptual illustration). Instead, it's one that the GPU hardware can efficiently decode on the fly. As fastbike1 said, you're very unlikely to ever notice the effects of it, but it can make a significant difference in the actual amount of GPU memory your scene occupies.
I think I researched this at one point and found that NVidia used the DXT family of algorithms with iRay, but I'm not positive about that. There's some discussion of texture compression algorithms here, for those interested:
https://developer.nvidia.com/astc-texture-compression-for-game-assets
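If it is DXT-family compression, the VRAM savings are easy to ballpark: DXT1 stores each 4x4 pixel block in 8 bytes and DXT5 in 16 bytes, i.e. roughly 8:1 and 4:1 versus uncompressed 8-bit RGBA. A back-of-the-envelope sketch, with the caveat that those are standard DXT ratios, not confirmed Iray behavior:

```python
# Rough VRAM math assuming DXT-style block compression, as speculated above:
# about 8:1 (DXT1-like) or 4:1 (DXT5-like) versus uncompressed RGBA8.
# Standard DXT ratios, not confirmed Iray behavior.
BYTES_PER_PIXEL = 4  # uncompressed 8-bit RGBA

def vram_mb(side, ratio):
    return side * side * BYTES_PER_PIXEL / ratio / 2**20

for side in (1024, 2048, 4096):
    print(f"{side}x{side}: uncompressed {vram_mb(side, 1):.1f} MB, "
          f"4:1 {vram_mb(side, 4):.1f} MB, 8:1 {vram_mb(side, 8):.1f} MB")
```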
Holy...
FINALLY! I get it now.
This thread is a life saver!
I've been troubled with this problem since... Forever!
There should be a warning about this. Or a sticky?
Most of my work has custom/modified textures and this is a very real problem!
So... Can anyone recommend a good standard value (the default is obviously really low), or do you adjust accordingly? Currently I have it at Medium: 2048 - High: 4096.
A thousand thank you's! :D
The default is a good standard value. Depending on your scene, however, the texture compression settings can be the difference between rendering on your GPU and dropping to the CPU. For most simple scenes I go with 1024 for Medium and 2048 for High. There's no benefit to letting VRAM go unused: as long as the scene still fits on the GPU (if GPU rendering matters to you), you can afford higher thresholds; if it doesn't fit, scaling back is in order. With a single figure and an HDRI, there's no reason not to go nuts, especially if the textures are of high quality. The difference really isn't anything to get too excited about unless you render at very large (>4K) sizes, and even then only in close-ups, which is why the default is usually fine.
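To make that trade-off concrete, here's a rough sketch of how threshold choices change an estimated texture memory total for an invented scene. The 4:1 and 8:1 ratios are the assumed DXT-like figures from the earlier link, not confirmed Iray numbers, and it ignores mip chains and the fact that normal maps reportedly aren't compressed at all:

```python
# Rough sketch: how threshold choices change an estimated texture VRAM total.
# Compression ratios are assumed DXT-like figures (4:1 medium, 8:1 high),
# not confirmed Iray numbers; ignores mip chains and uncompressed normal maps.
def estimate_total_mb(texture_sides, medium, high):
    total_bytes = 0.0
    for side in texture_sides:
        raw = side * side * 4          # 8-bit RGBA, uncompressed
        if side <= medium:
            total_bytes += raw         # below Medium: stored as-is
        elif side < high:
            total_bytes += raw / 4     # medium compression, ~4:1
        else:
            total_bytes += raw / 8     # high compression, ~8:1
    return total_bytes / 2**20

scene = [4096] * 6 + [2048] * 10 + [1024] * 20   # an invented texture set
for medium, high in [(512, 1024), (1024, 2048), (2048, 4096)]:
    print(f"Medium={medium:>4}, High={high:>4}: "
          f"~{estimate_total_mb(scene, medium, high):.0f} MB")
```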