Augmented faces sample #93
Conversation
This currently supports texture projection onto the face using the AugmentedFaceNode and calling setFaceMeshTexture
# Conflicts:
#	core/src/main/java/com/google/ar/sceneform/ArSceneView.java
#	settings.gradle
Hi!
There was no camera image because the new material for it isn't double-sided. Actually, I don't understand why we need to invert the front face winding for the front camera. We can correct the fox model if needed.
Thanks @grassydragon. I suspected the camera shader too, but was not able to confirm it as the sample/project somehow got corrupted: I'm getting some strange errors when trying to run it, so I'm still tackling that.
Rendered incorrectly in what way? It might be due to missing bone translations.
I'm not sure either. Since we still have the old material, we could also use the old one for the front-facing camera. I don't think any depth occlusion is needed in that case.
Have you tried "Invalidate Caches and Restart" or "Sync Project with Gradle"?
The texture is missing when I comment out the mentioned code.
I'll check the OpenGL example to see whether the inverted front face winding is required.
I don't see the front face winding being inverted in the ARCore sample. Did you take the OBJ models from there and convert them to glTF?
Basically yes, I took the FBX from here and converted it to glb (with textures) using Blender. This mesh contains the nose piece and both ear pieces. But I've noticed now that in the ARCore Java sample they each have a separate OBJ. That would allow us to better match the face regions, but the approach seems to go against this guide on creating assets for augmented faces. Also, I've noticed that occlusion is broken if reverse winding is disabled. Usually if you look up, the "face" mesh would occlude the ears mesh, for example.
Ok... I have made a big cleanup and based everything on a single renderable. The face model is now considered as one single glTF which comes in place of the … The standard case should consider applying a different renderable to the Face model node. Speaking of nodes, could you change the FaceNode behavior and make it more like an … And now comes the fun part: PLEASE DON'T LAUGH AT ME
Can you also have a look at the fox face .blend?
I know how to add an armature, link vertices to bones and assign weights but I need to understand how this applies to the Augmented Faces. |
Previously, it worked with 2 models:
WARNING: The TODO
canonical_face.blend: occlusion and other 3D objects are handled directly by the exported glTF.
My question here is: do you think that this part can be replaced with getting the AugmentedFace positions and rotations of the 3 bones (AugmentedFace.getRegionPose(RegionType.FOREHEAD_LEFT), etc.) and applying them to the glTF model bones, keeping the same behavior as previously?
I think we still need this part because the 3D objects are occluded by the face. I understand how to make the model now. We need to attach the 3D objects to the corresponding bones and update their positions and rotations at runtime. In the model, only the positions and rotations of the 3D objects relative to the bones matter, since they will be completely overwritten at runtime. Since the world poses will be set, the location of the root bone doesn't matter, but it can be located at AugmentedFace.getCenterPose.
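To illustrate the approach described above, here is a minimal sketch (assuming a Sceneform-style scene graph; the three node references are placeholders for whatever nodes or bones end up driving the glTF regions) of overwriting the region transforms every frame with the poses reported by ARCore:

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Pose;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Quaternion;
import com.google.ar.sceneform.math.Vector3;

public final class FaceRegionUpdater {

    // Called once per frame: the modelled offsets relative to each bone are kept
    // by the mesh itself; only the bone/node world poses are overwritten here.
    public static void updateRegions(
            AugmentedFace face, Node foreheadLeft, Node foreheadRight, Node noseTip) {
        applyPose(face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT), foreheadLeft);
        applyPose(face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT), foreheadRight);
        applyPose(face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP), noseTip);
    }

    private static void applyPose(Pose pose, Node node) {
        // World-space position and rotation straight from ARCore.
        node.setWorldPosition(new Vector3(pose.tx(), pose.ty(), pose.tz()));
        node.setWorldRotation(new Quaternion(pose.qx(), pose.qy(), pose.qz(), pose.qw()));
    }
}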
This currently supports texture projection onto the face using the AugmentedFaceNode and calling setFaceMeshTexture
# Conflicts:
#	core/src/main/java/com/google/ar/sceneform/rendering/Material.java
I merged the master branch to make sure we have the latest material fixes. Please pull.
Sorry guys, I was at work so I was not able to follow the discussion. I'm not sure I fully understand this part
I also don't fully understand why everything needs to be part of the same model (fox_face). I think the texture should be applied to the ARCore generated mesh and not the model mesh. This is also according to the documentation. From what I understand, ARCore generates a mesh that conforms to the user's face, whereas this one does not seem to (at least not now, unless …). For the bones: it's possible to get the bone entities using something like …
@ThomasGorisse I've played around a bit, trying to get the skeleton stuff working, but was unsuccessful.
Where I used this for getting the entities from the mesh, which worked fine, but I failed at actually changing the transform. If we do go down the route of not using the standard mesh (the one Google has in the documentation), maybe it might be easier if we split the mesh into 3 pieces, one for each face region type?
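For reference, a rough sketch of the entity lookup being described here (assuming the model was loaded through Filament's gltfio into a FilamentAsset; getFirstEntityByName is version-dependent, and the bone names are placeholders for whatever the .glb actually uses):

import com.google.android.filament.Engine;
import com.google.android.filament.TransformManager;
import com.google.android.filament.gltfio.FilamentAsset;

public final class FaceBoneLookup {

    // Resolve the three region bones once after loading; the returned transform
    // instances are what get updated every frame later on.
    public static int[] lookupRegionBoneInstances(Engine engine, FilamentAsset asset) {
        TransformManager tm = engine.getTransformManager();
        int noseTip = asset.getFirstEntityByName("NOSE_TIP");            // 0 if not found
        int foreheadLeft = asset.getFirstEntityByName("FOREHEAD_LEFT");
        int foreheadRight = asset.getFirstEntityByName("FOREHEAD_RIGHT");
        return new int[] {
                tm.getInstance(noseTip),
                tm.getInstance(foreheadLeft),
                tm.getInstance(foreheadRight)
        };
    }
}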
Do you get the right dedicated manager instance (= … Maybe @prideout could help us with the bones positioning with gltfio. @fvito Can you open a question about how to manipulate bones on the Filament repo and link it here?
The question is: easier for us or for the users?
@ThomasGorisse very good point, I opened a discussion in the Filament repository. A very helpful example has been provided and I was also able to make it work in our case. Now just the tricky math part is left to get the bones to work 😃 I will also push the bones parsing, if someone else wants to play with it.
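In case it helps, a sketch of that math (assuming column-major 4x4 matrices, as produced by ARCore's Pose.toMatrix and consumed by Filament's TransformManager, assuming the glTF root sits at the ARCore world origin, and assuming TransformManager's world-transform accessor behaves as in recent Filament releases; parent-entity lookup is left to the caller):

import android.opengl.Matrix;
import com.google.android.filament.TransformManager;
import com.google.ar.core.Pose;

public final class BoneMath {

    // Re-express a world-space ARCore region pose in the parent bone's space:
    // localFromRegion = inverse(worldFromParent) * worldFromRegion.
    public static void setBoneFromWorldPose(
            TransformManager tm, int boneEntity, int parentEntity, Pose regionPose) {
        float[] worldFromRegion = new float[16];
        regionPose.toMatrix(worldFromRegion, 0);

        float[] worldFromParent = new float[16];
        tm.getWorldTransform(tm.getInstance(parentEntity), worldFromParent);

        float[] parentFromWorld = new float[16];
        Matrix.invertM(parentFromWorld, 0, worldFromParent, 0);

        float[] localFromRegion = new float[16];
        Matrix.multiplyMM(localFromRegion, 0, parentFromWorld, 0, worldFromRegion, 0);

        tm.setTransform(tm.getInstance(boneEntity), localFromRegion);
    }
}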
I'm sorry to ask it one more time, but can someone help me understand why we need a second Renderable for occluding the real face?
I think yes.
I think that since textures can be separate files when using the text-based glTF format, it is better not to include the texture in the 3D model. The user can use any tool they like to paint the texture (for example, GIMP using the base texture from the ARCore manual as a reference, or Blender using the base face mesh and applying the texture to it).
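A minimal sketch of that workflow from the app side (based on the Sceneform 1.15-style API this PR ports; the texture resource and surrounding fields are placeholders), where the user-painted image stays a plain resource and is only bound at runtime:

import android.content.Context;
import com.google.ar.core.AugmentedFace;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.rendering.Texture;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

public final class FaceTextureBinder {

    // Load the user-painted face texture (e.g. a PNG painted in GIMP or Blender)
    // and attach it to a new AugmentedFaceNode for the tracked face.
    public static void attachPaintedTexture(
            Context context, Scene scene, AugmentedFace face, int textureResId) {
        Texture.builder()
                .setSource(context, textureResId)
                .build()
                .thenAccept(texture -> {
                    AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
                    faceNode.setParent(scene);
                    faceNode.setFaceMeshTexture(texture);
                });
    }
}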
+1 for the "easy" image texture modification with a graphical image editor. Can you check that everything is OK on the current PR branch so that we can merge it? If you need help with the Blender bones export, I have worked on it a little for another PR: google/filament#4236
Yes, I will test it today.
So we need to:
I will look at the code and try to do this in the following days. |
Thanks a lot @grassydragon. I don't have time to help right now since I'm currently working hard on the repository visibility from outside.
Hi!
Cool.
screen-20210717-133008.2.mp4
Hi!
Hi!
@ThomasGorisse @fvito
That is because we either set
First of all thanks for taking care of this augmented face subject. I didn't answer before because I was on the Environment Lighting subject. So, here are my answers:
You can just use the default Blender material and add the fox image texture node.
What is sure is that backface culling should not be defined inside the model, since we must keep in mind that augmented faces are not tied to front or rear camera usage. We should be able to use it in both cases, as we must be able to use the front camera for other purposes than augmented faces. By applying to the material, do you mean calling this: … It would mean that we apply a new material to the model, since it's only available from the Builder, and it means that we would have to apply it to every added model in the front-camera case?
I'm far from being as good as you on the fragment shader part, but did you have a look at the ARCore augmented faces sample, which uses this: … I will have more time to help you on this when the Environment Lights PR is merged, but we have a lot of demand for augmented faces. Thanks
I actually meant adding … But this isn't a good solution and won't be needed after we bring back these lines: https://github.com/ThomasGorisse/sceneform-android-sdk/blob/e3a594b3458d36b32e1d37e74bba8d0902c5ea8f/core/src/main/java/com/google/ar/sceneform/ArSceneView.java#L171 If you don't mind, I will make the camera material double-sided (so we can avoid the missing camera image) and push the fully working sample.
I totally trust you on this part.
# Conflicts:
#	core/src/main/java/com/google/ar/sceneform/ArSceneView.java
#	core/src/main/java/com/google/ar/sceneform/rendering/PlaneRenderer.java
#	samples/augmented-images/src/main/java/com/google/ar/sceneform/samples/augmentedimages/MainActivity.java
#	samples/image-texture/src/main/java/com/google/ar/sceneform/samples/imagetexture/MainActivity.java
#	settings.gradle
#	ux/src/main/java/com/google/ar/sceneform/ux/BaseArFragment.java
@@ -157,6 +157,12 @@ public void setTexture(String name, Texture texture) {
        }
    }

    public void setBaseColorTexture(Texture texture) {
Do we still need this method?
Frame frame = arFragment.getArSceneView().getArFrame();

// Get a list of AugmentedFace which are updated on this frame.
Collection<AugmentedFace> augmentedFaces = frame.getUpdatedTrackables(AugmentedFace.class);
This variant seems fine to me.
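For context, the variant under review roughly corresponds to a per-frame handler along these lines (a sketch assuming an ArFragment-based setup; the faceNodes map and field names are placeholders):

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.ux.ArFragment;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

public class FaceFrameHandler {

    private final ArFragment arFragment;
    private final Map<AugmentedFace, AugmentedFaceNode> faceNodes = new HashMap<>();

    public FaceFrameHandler(ArFragment arFragment) {
        this.arFragment = arFragment;
    }

    // Call from the scene's OnUpdateListener.
    public void onFrameUpdate() {
        Frame frame = arFragment.getArSceneView().getArFrame();
        if (frame == null) {
            return;
        }
        // Get a list of AugmentedFace which are updated on this frame.
        Collection<AugmentedFace> faces = frame.getUpdatedTrackables(AugmentedFace.class);
        for (AugmentedFace face : faces) {
            if (face.getTrackingState() == TrackingState.TRACKING && !faceNodes.containsKey(face)) {
                AugmentedFaceNode node = new AugmentedFaceNode(face);
                node.setParent(arFragment.getArSceneView().getScene());
                faceNodes.put(face, node);
            } else if (face.getTrackingState() == TrackingState.STOPPED) {
                AugmentedFaceNode node = faceNodes.remove(face);
                if (node != null) {
                    node.setParent(null);
                }
            }
        }
    }
}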
Hello!
This PR ports the Sceneform 1.15 Augmented Faces sample to this updated Sceneform library.
Currently, the supported features are
1.) Augmented Face Texture. By providing a texture made from the ARCore template, the texture can be projected onto the user's face.
2.) Augmented Face Regions Mesh. With this, you can provide a mesh that is based on the ARCore sample mesh and augments the user's face (see the loading sketch at the end of this description).
Currently there is no bone support yet; I've checked the official sample and mine and they both behaved the same, so I'm not entirely sure if we need it. We also do not have skeleton support, so getting bones to work might be tricky.
I'm facing one issue though. After updating to the latest master, the camera rendering stopped working for some reason.
What I've tried so far:
1.) Disabling depth, although I then saw it's disabled by default.
2.) Using the newer CameraConfig, as the old FeatureSet seems to be deprecated, but this ended up in an exception in Session.
Any help is much appreciated.
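To make feature 2 above concrete, a hedged sketch of loading the regions model and handing it to an existing face node (the asset path, the setIsFilamentGltf flag, and the surrounding fields follow the glTF-loading style used elsewhere in this fork and are placeholders that may need adjusting):

import android.content.Context;
import android.net.Uri;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

public final class FaceRegionsLoader {

    // Load a fox_face.glb built from the ARCore canonical face mesh and
    // attach it as the regions renderable of an AugmentedFaceNode.
    public static void loadRegionsModel(Context context, AugmentedFaceNode faceNode) {
        ModelRenderable.builder()
                .setSource(context, Uri.parse("models/fox_face.glb"))  // placeholder asset path
                .setIsFilamentGltf(true)
                .build()
                .thenAccept(model -> {
                    model.setShadowCaster(false);
                    model.setShadowReceiver(false);
                    faceNode.setFaceRegionsRenderable(model);
                });
    }
}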