What makes rendered Daz human images still distinguishable from photos of real humans?
tayloranderson2047
Posts: 116
So far I have to say that whenever I see a picture of a Daz figure, I can still tell immediately that it is not a photo of a real person, even though it is really close. What subtleties or obstacles are there? Is it just that the skin details are not enough, or the lighting, or the Iray engine? If so, are there any good examples of Daz-rendered images that look really authentic?
Here is a video of iPad finger painting whose result turns out to be great:
iPad Art - Morgan Freeman Finger Painting - YouTube
Comments
Hair is usually the first thing; I have been fooled by 3D with digitally painted hair.
The sum of all approximations?
Hair, clothing, expressions, skin... there are a bunch of "tells" that you don't even recognize consciously, but your brain knows ;). Even so, they've come a LONG way from the old days of Poser 4, when I first started. ;)
I took a closer look at the Kyle Lambert finger painting, and the Morgan Freeman photo is a real-world Scott Gries photograph, so there is smoke and mirrors (and AI) involved here.
With 3D, it's called the uncanny valley, but the gap is ever closing.
This thread has 65 pages if that helps https://www.daz3d.com/forums/discussion/313401/iray-photorealism#latest
If you hand me a photograph, I can tell immediately that the photograph is not a real person. Why is anyone surprised that people can tell other images are not photographs? Is the assumption that because both photographs and computer screens are experienced as 2d surfaces that our eyes should be confused between the two?
Can we sum this up as a "randomness of details"?
Even if a render looks natural at first glance, it will not look realistic if everything is too perfect: the hair is too evenly styled, there's not one yellow spot on the plant in the background, and all the sofa cushions look identical. You get the idea.
Renders need to be a bit more chaotic, so to speak, before we take the next step toward photorealism.
It is clearly a learning process. YOU cannot be fooled, due to your experience with 3D software and renderings. People who have never had anything to do with 3D art can be fooled easily.
Funnily enough, I was about to mention the same thing.
I can remember my mother watching Clash of the Titans (https://en.wikipedia.org/wiki/Clash_of_the_Titans_(1981_film)) and saying how good the graphics were, while I was simply horrified by them. I think you do learn, getting better at observing the unnoticed little things.
Regards,
Richard
Lately, one thing I often have a bit of trouble with is distinguishing whether a picture of, say, a human started out as a 3D render or as a photo of a real person that was heavily filtered/Photoshopped. Think Instagram models: some of those filters can result in the loss of skin details and such, which makes the picture resemble a 3D render.
Usually I can pick up some clues from things like hands and hand posing, lack of shadows or shadows wrong for the lighting, and lack of details in background objects.
My experience of Daz users is that they think renders look good/realistic when they really don't. So I would be hesitant to say familiarity with 3D has made them a more discriminating cohort versus the general populace.
The uncanny valley effect is probably more potent for non-3D artists, as 3D artists have become desensitized to it and accustomed to their deformed Daz models.
Well, on a different but related topic: I am much better at spotting AI than most of those I know who won't use it.
I often point it out (specific telltale details) to the ones who haven't blocked me when they share a link to a lovely image.
To begin with, the simple fact is that no DAZ figure has anywhere near the resolution or accuracy in details to completely mimic every aspect of a real human figure with anything even close to true physical accuracy. Even if such a figure were possible, rendering it would require vastly more computing power and rendering time than any home computer system... or most professional systems... are currently capable of. And that's not even considering all of the calculations needed for perfect hair, perfect clothing, perfect environment, etc. As an example, a feature film image running at 2K requires hundreds, if not thousands, of computers to produce a single frame. For FROZEN, which doesn't even have photorealistic characters, Disney used a bank of 4,000 render computers to produce the "Let It Go" sequence, and even with those resources it took 30 hours per frame to render. If a major studio throwing hundreds of millions of dollars at a project can't even believably remove Henry Selik's moustache in JUSTICE LEAGUE, or produce a perfect recreation of Peter Cushing in STAR WARS - ROGUE ONE, is it really reasonable to expect to be able to do it on a system with a graphics card meant for playing games?
Certainly people get better at distinguishing what they train with or what they get used to, e.g. by living circumstances (types of snow, telling the faces of other people apart, ...). Similarly, you learn to detect features when you try very hard to achieve something specific like photorealism, e.g. because you know the trade-offs, shortcomings, and methods of obfuscation. So I think it's not a huge surprise that a 3D artist's mom won't notice something they do. On the other hand, concerning valuation, maybe she detects the uncanny but compares it to the trickster techniques from 50 years ago ;), so she's quite positive about the progress, which would then be another area of comparison.
"Henry Selik" lol oh dear.
There are many likeness artists who make models that are basically indistinguishable from real life, working just by themselves, using only entry-level PC components.
Ian Spriggs is probably the most notable one:
But there are hundreds of such artists now:
There's nothing from a technical standpoint stopping us from making passably realistic models if we had the time/skill. Obviously we cannot make a human realistic at the atomic or molecular level, but at any macro scale it's clearly possible.
There are people doing realistic animation too. I see them in my feed all the time on Twitter, albeit the clips are only seconds long.
In terms of Daz Studio, hair is the biggest problem when it comes to rendering. What we have available for hair is far from industry strandard. Of course, one could always do hair in another software and then import it as a static mesh. On that basis, everything is mostly an artistic limitation.
You're talking about animations, which introduce the additional difficulty of movement. A still frame of Peter Cushing from Rogue One would fool a lot more people than seeing it in motion.
Yep. It was early in the morning and my brain misfired on Cavill. Still, that doesn't change the fact that almost everyone noticed Superman's lips looked wrong, and even though I'm sure the high end of the rumored cost, over $24 million, is a gross overstatement, it very definitely cost millions of dollars.
I suppose it depends upon your definition of indistinguishable, but the only ones in your attached examples that would fool me in a photo lineup are Spriggs', and each of his creations takes months, as he individually models almost everything, down to the characters' individual eyelashes, and then renders in Maya on a Lenovo workstation with four high-end graphics cards using V-Ray 6. Not exactly a hobbyist-level system or process. And yet, despite that, they can still drop into the uncanny valley when they're shown in motion or in extreme close-up... look at his GINA, for example, which looks great until you look at her armpit, or the front page of his website, where the depth of his skin textures doesn't always hold up at high magnification. But in any case, given that the subject is DAZ Studio and its figures, which have to serve as all-purpose tools that do a lot of lifting and carrying for a hobbyist user, rather than something made to be shown from one specific angle, it's really an apples-to-oranges comparison.
The hair is, indeed, nowhere near up to par, but given the number of follicles on the head alone (from a base of 90,000 on a redhead up to an upper limit of 150,000 on a blonde), we're once again looking at the limits of consumer technology. That said, I honestly feel that there are worse problems with the DAZ figures/Studio, most notably the lack of self-collision and an expression system that is overly dependent on bones that don't actually exist in real faces.
lol...
I am not so sure. Ever since elementary school, I have had a knack for recognizing matte work in movies/TV, whether traditional or CGI, no matter how subtle, even though I have no experience in special effects (unless reading, in fourth grade, The Making of Star Trek [TOS], which has photos of the practical matte process counts).
I don't think this is much different. Or maybe it is just some quirk of my brain's wiring. I am not usually bothered by the uncanny valley, although I recognize it in images.
I thought you were poking fun at Henry Cavill's bushy Magnum PI moustache—blending his name with Tom Selleck's—and just misspelled Selleck.
People who aren't familiar with 3D models tend to find them more photorealistic than people who are. That's not a jab at them or anything; when I first started working in Daz I thought a ton of product renders were mindblowingly realistic; they now just look like (very good) Daz renders to me. But "digital models" like Lil Miquela and Shudu really had a lot of people thinking they were actual models until the creators eventually disclosed that they weren't.
"What we have available for hair is far from industry strandard. "
@lilweep I see what you did there. Nicely played, and perhaps too understated, but well-played. I salute you!
Thanks for the great references from ArtStation! BTW, why do you say Daz hair is far from the industry standard? Do XGen and the current Blender hair system meet the industry standard?
I would say so, yes. XGen and Blender's new hair system, not the old Blender particle system, which was only slightly better than the SBH Editor.
If you are a Daz PA, the SBH feature is probably a little more versatile than it is for normal users. But if you look at how effortlessly you can make good hair from scratch in XGen and Blender's new hair system versus anything we are sold on the Daz store by supposed professionals, it makes me think there is an inherent limitation in SBH.
XGen really looks great, but you need Maya for XGen, am I right? As I only have Blender, I wonder if there will ever be an XGen plugin for Blender. At least it looks amazing.