CLOSED: Face transfer - Skin does a lot more than geometry

alex86fire Posts: 1,130
edited April 2020 in New Users

I noticed that the created skin helps the model look a lot like what you would want, but once you remove the skin, or add a new one, he looks nothing like it.

I will add a few renders of the generated G8 model of the same person with different skins (not the generated one), and I am curious whether anyone recognises the celebrity that was the basis for the generation.

First render is no skin

Second render is Gen 8 Male skin.

Third render is Burchard skin.

Fourth render is Dante skin.

Does anyone recognise the celebrity? I will post the ones with the generated skin later if no one recognises him (I would never guess with these skins).

 

Attachments: Celebrity no skin.png, Celebrity G8M skin.png, Celebrity Burchard skin.png, Celebrity Dante skin.png (1280 x 720)
Post edited by alex86fire on

Comments

  • SixDs Posts: 2,384
    edited December 2019

    This is common for all the utilities available for quickly creating look-a-likes from photos. All such utilities rely heavily on the textures created from the photos to achieve likenesses. Although the base model's mesh will be adjusted so that the main facial features fit the proportions of the photos (i.e. size and position of eyes, nose, mouth, etc.), the subtle things that give the individual recognizable characteristics exist only in the texture, not the mesh. That is why the likeness is lost when you change to a different texture. What such utilities are doing, more than anything else, is creating a 3D version of the 2D images you started with - they are just 3D photographs essentially. By contrast, characters that you may purchase (or make) are based, first and foremost, on morphing the mesh, then adding an appropriate texture. As a consequence, such character morphs will still retain the features of the character regardless of the texture used, although changing textures can influence the "look" of the character. The only way that the utilities for quick look-a-like creation could produce results, insofar as the mesh is concerned, similar to characters produced using morph sliders or a modelling program, would be if far more features, particularly the many subtle ones, were identified and used to modify the mesh during the creation process. Alas, they are not designed to do that, since that would defeat the quick-and-easy character creation feature that is the main selling point of such utilities.

    You can, of course, take the results from such utilities and manually tweak or add features that make the mesh more closely resemble the intended character if you wish.

    Post edited by SixDs on
  • alex86fire Posts: 1,130
    SixDs said:

    This is common for all the utilities available for quickly creating look-a-likes from photos. All such utilities rely heavily on the textures created from the photos to achieve likenesses. Although the base model's mesh will be adjusted so that the main facial features fit the proportions of the photos (i.e. size and position of eyes, nose, mouth, etc.), the subtle things that give the individual recognizable characteristics exist only in the texture, not the mesh. That is why the likeness is lost when you change to a different texture. What such utilities are doing, more than anything else, is creating a 3D version of the 2D images you started with - they are just 3D photographs essentially. By contrast, characters that you may purchase (or make) are based, first and foremost, on morphing the mesh, then adding an appropriate texture. As a consequence, such character morphs will still retain the features of the character regardless of the texture used, although changing textures can influence the "look" of the character. The only way that the utilities for quick look-a-like creation could produce results, insofar as the mesh is concerned, similar to characters produced using morph sliders or a modelling program, would be if far more features, particularly the many subtle ones, were identified and used to modify the mesh during the creation process. Alas, they are not designed to do that, since that would defeat the quick-and-easy character creation feature that is the main selling point of such utilities.

    You can, of course, take the results from such utilities and manually tweak or add features that make the mesh more closely resemble the intended character if you wish.

    I just saw my post now and realised that the images I had added are somehow no longer there, and this post makes a lot less sense without them :).

    You are right about the result of such quick programs.

    I have been reading lately about face recognition and 3D imaging, though, and it is very advanced. They could be making much more advanced software, but I imagine it would come at a different price as well.

    I have tried to do exactly what you are saying, taking the results and tweaking them with morphs and even Blender, and the result is still far off from what it should be. At least the side of the face gets corrected; with the face generation it is almost always flat and not human-looking at all.

    I think there are 2 improvements that would not be hard to add and would improve the result a lot:

    1. Add the possibility to provide a side picture as well. The side picture would be used mostly for geometry, but it would do so much for the face generation. Optionally, they could let you tick an option to also use the texture from the side picture and combine both, if you have a side picture from the same set as the front one, like the ones 3dsk provides, for example.

    2. Let you place some main points on the face. With a perfect image the result is decent, but if the person has hair on the side of his/her face, a lot of the time you get some ugly faces, not real-looking at all. There are also some features that are not recognized perfectly. Instead of working on improving the algorithm, which could prove to be very hard, they could just allow the users to place the main points of the face on the photo, maybe even some more if you want very precise detail. Those points could be the corners of the lips, the middle of the lips, a few on the chin and jaw for defining a correct face shape (which is almost never the case), and a few on the nose (which also rarely looks similar to the picture). These points would allow for a still-fast generation, without too much hassle on the user's part and without a lot of programming hassle on their part. I mean, they already have to recognize those points anyway, so why not let the user provide them if they want additional accuracy? (A rough sketch of what I mean is below.)
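
    Just to make the idea concrete, here is a minimal, purely illustrative Python sketch of how a handful of user-placed landmark points could be turned into scale-free face proportions and then into morph dial values. None of the point names, dial names or formulas come from Face Transfer or DAZ Studio; they are all invented for the example.

    # Purely illustrative sketch - NOT Face Transfer's actual method.
    # Idea: the user clicks a few named points on the photo; simple,
    # scale-free proportions are computed from them and mapped onto
    # hypothetical morph dials. All names and numbers below are invented.

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    def dist(a: Point, b: Point) -> float:
        return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

    def face_ratios(p: dict) -> dict:
        """Turn user-placed landmarks into proportions relative to face height,
        so the numbers do not depend on photo resolution or framing."""
        face_height = dist(p["forehead_top"], p["chin_bottom"])
        return {
            "eye_spacing": dist(p["eye_outer_l"], p["eye_outer_r"]) / face_height,
            "nose_width":  dist(p["nose_wing_l"], p["nose_wing_r"]) / face_height,
            "mouth_width": dist(p["mouth_corner_l"], p["mouth_corner_r"]) / face_height,
            "jaw_width":   dist(p["jaw_l"], p["jaw_r"]) / face_height,
        }

    def ratios_to_dials(ratios: dict, neutral: dict) -> dict:
        """Map the relative deviation from a 'neutral' face to a -1..1 dial
        value (clamped). The dial names are hypothetical."""
        dials = {}
        for name, value in ratios.items():
            delta = (value - neutral[name]) / neutral[name]
            dials[name + "_morph"] = max(-1.0, min(1.0, 2.0 * delta))
        return dials

    # Example with made-up coordinates clicked on a 1280 x 720 photo:
    clicks = {
        "forehead_top":   Point(640, 90),  "chin_bottom":    Point(640, 610),
        "eye_outer_l":    Point(520, 280), "eye_outer_r":    Point(760, 280),
        "nose_wing_l":    Point(600, 400), "nose_wing_r":    Point(680, 400),
        "mouth_corner_l": Point(570, 480), "mouth_corner_r": Point(710, 480),
        "jaw_l":          Point(470, 470), "jaw_r":          Point(810, 470),
    }
    neutral = {"eye_spacing": 0.45, "nose_width": 0.17,
               "mouth_width": 0.28, "jaw_width": 0.65}
    print(ratios_to_dials(face_ratios(clicks), neutral))

    The point is just that once the key points are known, whether detected automatically or placed by the user, deriving mesh proportions from them is straightforward arithmetic; the hard part these utilities skip is capturing the many subtle features beyond such coarse proportions.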
