Rendering objects that are far from the camera

Hi. I came here today for advice, but I don't really know if something like this is possible. I wouldn't call it a problem; rather, I'm just interested in "improving" my rendering.

Is there a specific rendering setting I should focus on if I'm rendering objects / characters / locations further away from the camera? It's normal that the further an object is from the camera, the less detail is visible, but I would like to know if this can somehow be improved :)

Whoever comes and advises - thank you! Have a nice day

Comments

  • bytescapes Posts: 1,841

    You might want to look at the Depth of Field setting in the camera. There's a discussion of depth of field here, with a link to a useful tutorial.

    With the standard DAZ rendering camera, objects are in sharp focus whether they're near or far from the camera. In real-world photography, objects may be sharp ('in focus') or blurry ('out of focus') depending on their distance, on the characteristics of the lens used, and the point of focus of the lens. You've probably seen photographs where, for example, the subject is in sharp focus but the background behind is blurred. Using the Depth of Field and Focal Point controls in DAZ Studio lets you reproduce this effect, and can be an important cue for adding a sense of distance and scale to your pictures (or, as in photography, for directing the viewer's attention to important parts of the scene).

  • There is a sort of "limit" to rendering in a "virtual space". Daz/Iray, like other programs, has internal limits within which certain things are allowed to happen. I think you may be hitting one of these limits. However, if that is not the case, this first suggestion won't help you at all.

    If you suspect that you have hit an outer limit, you can try a few things.

    1: Floating-point precision bounds... At some distance from "the camera" and/or "the origin 0,0,0", floating-point precision begins to fail. There just aren't enough "decimal places" to correctly represent the objects, textures, lighting, etc. The simple solution for rendering programs is to just "try their best". Daz, I assume because of OpenGL, has a less friendly limit. Also, "the best" seems to include dropping some shaders in the process. (Or possibly the shaders are just reaching their own limits: too many 0.000000000001's where they expect 0.00000000000000000000001, or the values simply get clipped to 0.0, which means "do nothing" or "off" in most shaders.)

    For this issue, in Daz, you can simply try placing "the visible components" as near the center of the starting area as possible. This is easy to do if you group the whole scene within a single group; you can then find the world's center by dropping in a default primitive, which normally lands right on 0,0,0. Wherever that object sits is where you want your most detailed "distant objects" to be. Normally, we tend to "build outward" in one direction, leaving the whole composition biased toward one side of the coordinate space, instead of occupying the +/- space on X, Y and Z close to the origin. (NOTE: This is also why most programs save objects normalized, with 0,0,0 in the dead center of the object, and use an "origin" modifier to set a user origin, which tends to fall naturally on one corner of a cube-like shape. It helps retain object detail better.)
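
    To see that precision fall-off in isolation, here is a minimal sketch (plain Python/NumPy, nothing DAZ- or IRAY-specific, and treating one scene unit as one centimetre is just an assumption) of how the smallest step a 32-bit float can represent grows as coordinates drift away from the origin:

    import numpy as np

    # The gap between adjacent representable float32 values grows with magnitude,
    # so fine surface detail is quietly rounded away far from 0,0,0.
    for position in [1.0, 100.0, 10_000.0, 1_000_000.0]:
        step = np.spacing(np.float32(position))  # smallest representable increment at this coordinate
        print(f"at {position:>12,.0f} units, smallest step is about {step:.3g}")

    At 1 unit from the origin, the smallest step is around a ten-millionth of a unit; at a million units it is already a sixteenth of a unit, which is why pulling the whole scene back toward the origin preserves detail.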

    The other part of that issue could simply be the camera's distance from the objects in question. The further away you move the camera, the more the same floating-point issue comes into play, but from the view-port rather than the world-space. Unless you NEED a deeply linear shot from the camera, just move it closer and make the "field of view" larger. (The closer you move the camera to the object, the bigger the object obviously looks. Widening the "field of view" makes the scene smaller again, so more fits in the view.) In the reverse situation, being too close with a camera can sometimes, in extremes, cause an inverse floating-point issue. It can make things disappear, because they are so large that the center or "majority" of a LONG or LARGE triangle or surface has fallen out of the natural view, even though part of it should still be in view. (You can see this with a primitive floor composed of only 2 triangles making 1 square. The floor can disappear if you zoom in or move the camera so that the majority of the floor is out of view; it just doesn't try to draw it at all. You need to make a floor with more divisions to fight that issue.)
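
    The "move closer, widen the field of view" trade can be worked out rather than guessed. Here is a small hedged sketch (plain Python; the numbers are made up for illustration) using the pinhole-camera relationship that the subject keeps the same framing as long as distance times tan(FOV/2) stays constant:

    import math

    def compensating_fov(old_fov_deg, old_distance, new_distance):
        """Field of view that keeps the same framing after moving the camera."""
        old_half = math.radians(old_fov_deg) / 2.0
        new_half = math.atan(math.tan(old_half) * old_distance / new_distance)
        return math.degrees(new_half) * 2.0

    # Halving the camera distance calls for a much wider field of view:
    print(compensating_fov(30.0, 1000.0, 500.0))  # roughly 56 degrees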

    When it comes to sub-divisions, they also tend to "stop rendering" at certain distances. At some point, either Daz or IRAY evaluates the scene and determines that a subdivision surface falls below a "pixel level". It MAY not subdivide a surface that fits into 2 or 3 pixels into a surface that would normally subdivide into a 3x3 grid of 9 sub-divided surfaces. (A circle may turn into a stop-sign far from the camera, even though the sub-divisions would have altered those 2-3 pixels to make it look round instead of like an octagon. This is usually only visible on the profiles of objects, not on front-facing surfaces.)
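
    You can roughly estimate when that kicks in. This sketch uses plain pinhole-camera math (an assumption; it is not IRAY's actual level-of-detail rule) to guess how many pixels a feature covers on screen; once that drops to 2 or 3 pixels, extra subdivision can't show up anyway:

    import math

    def projected_pixels(feature_size, distance, fov_deg, image_height_px):
        """Approximate on-screen height, in pixels, of a feature seen head-on."""
        view_height = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
        return feature_size / view_height * image_height_px

    # A 10-unit feature, 5,000 units from the camera, 40-degree FOV, 1080-pixel-tall render:
    print(projected_pixels(10.0, 5000.0, 40.0, 1080))  # roughly 3 pixels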

    2: Nothing ABOVE is wrong... You just don't have enough "stuff" or "fake reality" to make it seem "realistic": a lack of fog depth, a lack of "depth blur", no surrounding objects that contribute to the composition, no comparable "backgrounds" to fill the virtual world behind the objects...

    There are a few things you can do to simulate "atmospheric fog". The most common is to output a "depth-map" of the scene. You will have to use a paint program and a function designed to "normalize" the output of that file; it will rarely ever be correct, or desired, straight out of the bag. The render should produce a nice "floating-point" grey-scale image that extends well beyond the visible or "digitally portrayable" range. You can use this to bring the limits back into a visible range and then apply the result to the final rendering to simulate depth-volume. (Depending on the lighting, depth-volume has two possible effects. One can be simulated without doing any of this at all: you just need a single light that is "dim" but powerful enough to reach the ends of your rendering world. That is used to simulate "night-time" or "darkness depth": things closer are brighter, and things further away should "fade into the blackness of the night". For a lit-up scene, however, you want the reverse. The further things are from the camera, the more they shift into "whiteness", or whatever tinted whiteness the sun offers at the time: near sunset the whiteness is yellow if the sun is near it, and blue-black if the sun is away and the "night" is creeping over the horizon. Using the grey-scale depth data, you can create a blended layer for "brightness", using the grey to increase the brightness of items that are further from the camera, i.e. deeper in the view.)
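
    If you want to try the depth-map fog in post, here is a minimal sketch (NumPy + Pillow; the file names, haze colour and falloff exponent are all assumptions you would tune by eye) that normalizes a depth pass and fades distant pixels toward a haze colour:

    import numpy as np
    from PIL import Image

    render = np.asarray(Image.open("render.png").convert("RGB"), dtype=np.float32) / 255.0
    depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32)

    # Normalize the depth pass so the nearest pixel is 0 and the farthest is 1.
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)

    haze = np.array([0.85, 0.87, 0.95], dtype=np.float32)   # pale, slightly blue "daylight" haze
    amount = depth[..., None] ** 1.5                         # the exponent just shapes the falloff

    foggy = render * (1.0 - amount) + haze * amount
    Image.fromarray((foggy * 255).astype(np.uint8)).save("render_with_fog.png")

    For the "night-time" version described above, swap the haze colour for near-black and the same blend darkens distant objects instead of washing them out.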

    For blur associated with "depth", you can, with rather horrible results, use IRAY's DOF by turning that option on in the camera. You will have to play with the focal point and focal limits to get a good "fake" DOF. (It will be untrue, exaggerated, and look fake no matter what you do; it's just too complex to "calculate" a true DOF blur. You would need billions of renderings and convergence math that would kill a super-computer. There is another way, using the same depth-map. It will be less realistic, but you get full control over it, post-render. I do not recall the name of the filter, but it uses the values in a depth-map to determine which pixels need to be blurred more, based on your settings.)
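
    As a rough idea of that depth-driven blur in post, here is a hedged sketch (NumPy, Pillow and SciPy; the focus depth, blur strength and single-blur-level mixing are simplifying assumptions, not how a real lens behaves):

    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    render = np.asarray(Image.open("render.png").convert("RGB"), dtype=np.float32)
    depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32)
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)

    focus_depth = 0.2      # normalized depth that should stay sharp
    blur_sigma = 6.0       # how soft the farthest planes get

    # Blur each colour channel once, then mix sharp vs. blurred per pixel by how
    # far that pixel's depth sits from the focus plane.
    blurred = np.stack(
        [gaussian_filter(render[..., c], sigma=blur_sigma) for c in range(3)], axis=-1
    )
    mix = np.clip(np.abs(depth - focus_depth) / (1.0 - focus_depth), 0.0, 1.0)[..., None]
    out = render * (1.0 - mix) + blurred * mix
    Image.fromarray(out.astype(np.uint8)).save("render_dof.png")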

    I know you said "more detail", but the reason it looks fake is that it HAS more detail than it should. Adding more detail would make it look even more fake, not less. That is the "thing" you "just can't put your thumb on"... the thing where you know something is wrong, but you just can't tell what.

    However, there is ONE situation where that is UNTRUE. If you are using a special lens and taking a "focused" shot of something far away, then the unreal element IS the lack of detail in that item, but also the "extra detail of foreground items". If the distance is your "focus", then everything closer to the camera should be getting blurrier and losing detail. Again, unless you are using DOF in the camera, that is why it "feels odd": because it is ALL IN FOCUS. Even the best infinite-focus lens would not have "everything in focus", just a super-wide focal depth. Even Hubble needs glasses! That's the world's largest (second largest now) infinite-focus camera.

     

    I hope something here helps you.

    Worst case, you need a few renderings where you don't change the camera but do change the deeper scenery, then stack all the transparent renderings as layers on top of one another for a "final detailed composition". You can also offset the camera position and do all sorts of funky stuff to spot-render things with more detail, then later "stitch" them into one super-large rendering which you can reduce down to a smaller image with "more detail trapped in the pixels".
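
    Stacking the layers afterwards is straightforward. Here is a small sketch (Pillow; the file names are placeholders) that composites several transparent renders, shot from the same camera, back-to-front:

    from PIL import Image

    layer_files = ["background.png", "midground.png", "foreground.png"]  # far to near

    composite = Image.open(layer_files[0]).convert("RGBA")
    for name in layer_files[1:]:
        composite = Image.alpha_composite(composite, Image.open(name).convert("RGBA"))

    composite.save("final_composition.png")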

  • Sorry for the late reply, but I've been pretty busy with work. I hope that's okay :)

    @bytescapes
    DOF is a function I can't imagine rendering without. Even an ordinary scene can be made amazing with DOF, so I already understand this technique from real-life photography, but I appreciate your effort and help :) Thank you!

    @JD_Mortal - Very long answer, it took me a while to read it. Well, you have presented a lot of useful information for which I am thankful. :)
