
Augmented faces sample #93
Merged (18 commits) Sep 25, 2021

Conversation

@fvito (Contributor) commented Jun 19, 2021

This PR ports the SceneForm 1.15 AugmentedFaces sample to this updated sceneform library.

Currently, the supported features are:
1.) Augmented face texture. By providing a texture made from the ARCore template, the texture can be projected onto the user's face.
2.) Augmented face regions mesh. With this, you can provide a mesh based on the ARCore sample mesh and augment users' faces.

There is no bone support yet. I've checked the official sample and mine, and they both behaved the same, so I'm not entirely sure we need it. We also do not have skeleton support, so getting bones to work might be tricky.

I'm facing one issue though. After updating to the latest master, the camera rendering stopped working for some reason.
What I've tried so far:
1.) Disabling depth, although I then saw it's disabled by default.
2.) Using the newer CameraConfig, as the old FeatureSet seems to be deprecated, but this ended up in an exception in Session.

Any help is much appreciated.

fvito added 4 commits June 16, 2021 18:39
This currently supports texture projection onto the face using the AugmentedFaceNode and calling setFaceMeshTexture
# Conflicts:
#	core/src/main/java/com/google/ar/sceneform/ArSceneView.java
#	settings.gradle
@grassydragon (Contributor)

Hi!
If I comment out this condition, the image from the camera appears, however, in this case the fox nose is rendered incorrectly.
https://github.com/ThomasGorisse/sceneform-android-sdk/blob/ae96dce5c7957492166026e010f3762a4e5898b9/core/src/main/java/com/google/ar/sceneform/ArSceneView.java#L162

@grassydragon (Contributor)

There was no camera image because the new material for it isn't double-sided. Actually, I don't understand why we need to invert the front face winding for the front camera. We can correct the fox model if needed.

@fvito (Contributor, Author) commented Jun 23, 2021

Thanks @grassydragon. I suspected the camera shader too, but was not able to confirm it, as the sample project somehow got corrupted (I'm getting some strange errors when trying to run it), so I'm still tackling that.

Hi!
If I comment out this condition, the image from the camera appears, however, in this case the fox nose is rendered incorrectly.
https://github.com/ThomasGorisse/sceneform-android-sdk/blob/ae96dce5c7957492166026e010f3762a4e5898b9/core/src/main/java/com/google/ar/sceneform/ArSceneView.java#L162

Rendered incorrectly in what way? It might be due to the missing bone translations.

There was no camera image because the new material for it isn't double-sided. Actually, I don't understand why we need to invert the front face winding for the front camera. We can correct the fox model if needed.

I'm not sure either. Since we still have old material, we could also use old one for front facing camera. I don't think any depth occlusion is needed in that case.

@grassydragon (Contributor)

getting some strange errors when trying to run it

Have you tried "Invalidate Caches / Restart" or "Sync Project with Gradle Files"?

Rendered incorrectly in what way?

The texture is missing when I comment out the mentioned code.

I'm not sure either.

I'll check the OpenGL example to see whether the inverted front face winding is required.

@grassydragon (Contributor)

I don't see the front face winding being inverted in the ARCore sample. Did you take the OBJ models from there and convert them to glTF?

@fvito (Contributor, Author) commented Jun 27, 2021

I don't see the front face winding being inverted in the ARCore sample. Did you take the OBJ models from there and convert them to glTF?

Basically yes, I took the FBX from here and converted it to GLB (with textures) using Blender. This mesh contains the nose piece and both ear pieces.

But I've noticed now that in the ARCore Java sample they each have a separate OBJ. That would allow us to better match the face regions, but the approach seems to go against this guide on creating assets for augmented faces.

Also, I've noticed that occlusion is broken if reverse winding is disabled. Usually, if you look up, the "face" mesh would occlude the ears mesh, for example.

@ThomasGorisse (Collaborator)

Hi,
I'm on it:
[image]

I already fixed the main errors.

For the texture part, I think we don't have to care about it.
With glTF, texture management is independent from the FaceNode.
It can be included in the glTF model or changed at runtime by accessing the material on the RenderableInstance.

@ThomasGorisse (Collaborator) commented Jun 27, 2021

Ok...

I have made a big cleanup and based everything on a single renderable.

The face model is now considered a single glTF which takes the place of the detected AugmentedFace pose.
The runtime texture part is possible but not reliable since, for example, a nose might not have the same mesh depending on what you want to render.

The standard case should be applying a different renderable to the face model node.

Speaking of nodes, could you change the FaceNode behavior and make it more like an AnchorNode = just a container for another node? This way, you will not have to override every renderable function in FaceNode to apply it to the internal region child node.

And now comes the fun part:
Link the face meshes to the predefined bones and possibly make some bone modifications at runtime.
I haven't worked on Blender armatures yet. If you have some knowledge about it...

PLEASE DON'T LAUGH AT ME

Screenshot_20210628-015600

Screenshot_20210628-015705

@ThomasGorisse (Collaborator) commented Jun 28, 2021

Can you also have a look at the fox face .blend?
I made it very quickly and didn't parent the ears and nose to the bones, just placed them approximately.
I also reset the materials to run tests, but there is some work to do to have a standard 3D model sample for face positions and materials.
There might also be issues with the defined shadowing.

@grassydragon (Contributor)

I haven't worked on Blender armatures yet. If you have some knowledge about it...

I know how to add an armature, link vertices to bones, and assign weights, but I need to understand how this applies to Augmented Faces.

@ThomasGorisse (Collaborator)

Previously

Works with 2 models:

WARNING: The y-axis 180° rotation should/will be removed from the .blend, the .glb and the FaceNode class

TODO

  • Make everything with a single glTF model

canonical_face.blend

[image]

Occlusion and other 3D objects are handled directly by the exported glTF.

  • Apply the pose (position/rotation/scale?) information returned by the ARCore AugmentedFace

My question here is: do you think that this part

https://github.com/ThomasGorisse/sceneform-android-sdk/blob/b21838d3aebd9caa45b7b9919a70289633b3c4ec/ux/src/main/java/com/google/ar/sceneform/ux/AugmentedFaceNode.java#L307-L371

can be replaced by getting the AugmentedFace positions and rotations of the 3 bones (AugmentedFace.getRegionPose(RegionType.FOREHEAD_LEFT)) and applying them to the glTF model bones, keeping the same behavior as before?

@grassydragon (Contributor)

My question here is do you think that this part

I think we still need this part because the 3D objects are occluded by the face.

I understand how to make the model now. We need to attach the 3D objects to the corresponding bones and update their positions and rotations at runtime. In the model, only the positions and rotations of the 3D objects relative to the bones matter, since they will be completely overwritten at runtime. Since the world poses will be set, the location of the root bone doesn't matter, but it can be placed at AugmentedFace.getCenterPose().

fvito and others added 5 commits June 28, 2021 15:58
This currently supports texture projection onto the face using the AugmentedFaceNode and calling setFaceMeshTexture
# Conflicts:
#	core/src/main/java/com/google/ar/sceneform/rendering/Material.java
@ThomasGorisse (Collaborator)

I merged the master branch to make sure to have the latest material fixes.

Please pull.

@fvito (Contributor, Author) commented Jun 28, 2021

Sorry guys, I was at work so I was not able to follow the discussion.

I'm not sure I fully understand this part:

Speaking of nodes, could you change the FaceNode behavior and make it more like an AnchorNode = just a container for another node? This way, you will not have to override every renderable function in FaceNode to apply it to the internal region child node.

AugmentedFaceNode already is a container of other nodes? Or do you want it to not be a container of other nodes? I mostly just took the original example.

I also don't fully understand why everything needs to be part of the same model (fox_face). I think the texture should be applied to the ARCore-generated mesh and not the model mesh. This is also according to the documentation. From what I understand, ARCore generates a mesh that conforms to the user's face, whereas this one does not seem to (at least not now, unless updateFaceMeshVerticesAndTriangles modifies it, but I'm not sure how to change just part of the model mesh).
If you want to make a "filter"-like feature, just a texture is needed; no need to make a full mesh/model.

For the bones: it's possible to get the bone entities using something like
faceRegionNode.getRenderableInstance().getFilamentAsset().getEntitiesByName("FOREHEAD_RIGHT"); (not sure if that is the correct approach) and then modifying the entity transform. Other than this approach, I haven't found any other way to get the actual skeleton data from the model.

@ThomasGorisse linked an issue Jun 29, 2021 that may be closed by this pull request
@fvito (Contributor, Author) commented Jul 1, 2021

@ThomasGorisse I've played around a bit, trying to get the skeleton stuff working, but was unsuccessful.

private void extractBonesFromRenderable() {
    if (!faceMeshSkeleton.isEmpty()) {
        faceMeshSkeleton.clear();
    }

    for (RegionType type : RegionType.values()) {
        String boneName = boneNameForRegion(type);
        int entity = faceRegionNode.getRenderableInstance().getFilamentAsset().getFirstEntityByName(boneName);
        if (entity == 0) {
            Log.w(TAG, "Face mesh model is missing bone " + boneName + ". Tracking might not be accurate");
            continue;
        }
        faceMeshSkeleton.put(type, entity);
    }
}

Where faceMeshSkeleton is just a HashMap<RegionType, Integer>

I used this for getting the entities from the mesh, which worked fine, but I failed at actually changing the transform.
I tried both
RenderableManager.setBonesAsMatrices() and TransformManager.setTransform(), but neither seemed to work. As a simple test, I wanted to transform the nose mesh onto one of the ear meshes. I'm not even sure how OK it is for a node to be accessing the engine/transform system.

If we do go down the route of now using the standard mesh (the one Google has in the documentation), maybe it might be easier if we split the mesh into 3 pieces, one for each face region type?

@ThomasGorisse (Collaborator) commented Jul 1, 2021

I used this for getting the entities from the mesh, which worked fine, but I failed at actually changing the transform.
I tried both
RenderableManager.setBonesAsMatrices() and TransformManager.setTransform(), but neither seemed to work. As a simple test, I wanted to transform the nose mesh onto one of the ear meshes. I'm not even sure how OK it is for a node to be accessing the engine/transform system.

Do you get the right dedicated manager instance (= the EntityInstance of an Entity) before calling the TransformManager and RenderableManager functions?

Maybe @prideout could help us on the bones positioning with gltfio.

@fvito Can you open a question about how to manipulate bones on the Filament repo and link it here?

If we do go down the route of now using the standard mesh (the one Google has in the documentation), maybe it might be easier if we split the mesh into 3 pieces, one for each face region type?

The question is: easier for us or for the users?
We are not so much to blame, since it's also the case in the ARCore Unity SDK Augmented Faces example.
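The dedicated manager instance pattern can be sketched as follows (a minimal sketch, not the PR's actual code; it assumes Filament's Java TransformManager API, and BoneTransforms/setBoneLocalTransform are hypothetical names). The key point: setTransform() must be called with the EntityInstance returned by getInstance(entity), not with the entity id itself, and it expects a parent-relative, column-major float[16]:

```java
// Hypothetical helper: set one bone's LOCAL transform via Filament's
// TransformManager. The entity ids would be the ones collected per
// RegionType (e.g. by extractBonesFromRenderable above).
import com.google.android.filament.Engine;
import com.google.android.filament.TransformManager;

class BoneTransforms {
    static void setBoneLocalTransform(Engine engine, int boneEntity, float[] localColumnMajor4x4) {
        TransformManager tm = engine.getTransformManager();
        if (!tm.hasComponent(boneEntity)) {
            return; // the entity has no transform component
        }
        int instance = tm.getInstance(boneEntity); // EntityInstance, not the entity id
        tm.setTransform(instance, localColumnMajor4x4);
    }
}
```

At runtime, each RegionType pose from AugmentedFace.getRegionPose(...) would be converted to a matrix, brought into the bone's parent space, and passed to a helper like this.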

@fvito (Contributor, Author) commented Jul 4, 2021

@ThomasGorisse very good point; I opened a discussion in the Filament repository. A very helpful example was provided, and I was able to make it work in our case too. Now just the tricky math part is left to get the bones working 😃 I will also push the bones parsing, in case someone else wants to play with it.

@ThomasGorisse (Collaborator) commented Jul 4, 2021

I'm sorry to ask one more time, but can someone help me understand why we need a second Renderable for occluding the real face?

@grassydragon (Contributor)

Is that it?

I think yes.

Or you think it will be confusing for him?

I think that, since textures can be separate files when using the text glTF format, it is better not to include the texture in the 3D model. The user can use any tool they like to paint the texture (for example, GIMP, using the base texture from the ARCore manual as a reference, or Blender, using the base face mesh and applying the texture to it).

@ThomasGorisse (Collaborator)

+1 for the "easy" image texture modification with a graphical image editor

Can you check that everything is OK on the current PR branch so that we can merge it?

If you need help with the Blender bone export, I have worked on it a little for another PR: google/filament#4236

@grassydragon (Contributor)

Can you check that everything is OK on the current PR branch so that we can merge it?

Yes, I will test it today.

@grassydragon (Contributor) commented Jul 5, 2021

So we need to:

  • Create a model with bones.
  • Apply transformations to them.
  • Fix the occlusion using the face mesh so that the black regions don't appear.

I will look at the code and try to do this in the following days.

@ThomasGorisse (Collaborator)

Thanks a lot @grassydragon.

I don't have time to help right now since I'm currently working hard on the repository's visibility from outside.

@grassydragon (Contributor)

Hi!
I created a new model; however, I have problems with positioning the ears and nose using the bones. Since it is only possible to set the local transformation in Filament, and the transformation of the root node gets updated too, I tried using the inverse transformation of the root bone to calculate the local transformation. However, while the world positions from the ARCore pose and after applying the transformation were similar, the fox nose covered the whole screen for some reason.
I also added a red sphere to show that the nose tip position returned by ARCore is correct.
The current code applies the world transformations to the bones, which isn't correct, but it still produces an effect that I can't explain.
Models.zip
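The local-from-world computation described here can be sketched with plain matrix math (a minimal plain-Java sketch with no Filament or ARCore types; BoneMath and its method names are illustrative, not from the PR): local = inverse(parentWorld) × world, where for a rigid transform the inverse is [Rᵀ | −Rᵀt].

```java
// Plain-Java sketch: a bone's LOCAL transform is inverse(parentWorld) * world.
// For rigid transforms (rotation + translation) the inverse is cheap:
// transpose the rotation and negate the rotated translation.
// Matrices are column-major float[16], as Filament uses.
class BoneMath {

    // out = a * b (column-major 4x4).
    static float[] multiply(float[] a, float[] b) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++) {
                    sum += a[k * 4 + row] * b[col * 4 + k];
                }
                out[col * 4 + row] = sum;
            }
        }
        return out;
    }

    // Inverse of a rigid transform: [R | t]^-1 = [R^T | -R^T t].
    static float[] invertRigid(float[] m) {
        float[] out = new float[16];
        for (int col = 0; col < 3; col++) {
            for (int row = 0; row < 3; row++) {
                out[col * 4 + row] = m[row * 4 + col]; // transpose the rotation block
            }
        }
        for (int row = 0; row < 3; row++) {
            out[12 + row] = -(out[row] * m[12] + out[4 + row] * m[13] + out[8 + row] * m[14]);
        }
        out[15] = 1f;
        return out;
    }

    // The local transform to assign to a bone so it lands at `world`.
    static float[] worldToLocal(float[] parentWorld, float[] world) {
        return multiply(invertRigid(parentWorld), world);
    }

    public static void main(String[] args) {
        // Parent: 90 degree rotation about Z plus a translation (rigid).
        float[] parent = {0, 1, 0, 0, -1, 0, 0, 0, 0, 0, 1, 0, 2, 3, 4, 1};
        // Desired world pose of the bone.
        float[] world = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1};
        float[] local = worldToLocal(parent, world);
        // Sanity check: parent * local recovers the world transform.
        float[] recomposed = multiply(parent, local);
        for (int i = 0; i < 16; i++) {
            if (Math.abs(recomposed[i] - world[i]) > 1e-4f) {
                throw new AssertionError("round trip failed at index " + i);
            }
        }
        System.out.println("round trip OK");
    }
}
```

Whether this kind of computation fixes the full-screen nose depends on the root bone's transform actually being rigid; if the glTF import bakes in scale, a full 4x4 inverse would be needed instead of invertRigid.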

@ThomasGorisse (Collaborator)

Cool.
Can you make a quick screen recording?

@grassydragon (Contributor)

screen-20210717-133008.2.mp4

@grassydragon (Contributor)

Hi!
I have found out that there are some problems with units and rotation when using bones. I will try to correct them in the model.

@grassydragon (Contributor)

Hi!
I have corrected the scale and rotation of the bones. Now it is only required to finish face texture rendering and object occlusion. Here is the updated model:
Model.zip
Should I adjust the material parameters?

@grassydragon (Contributor)

Hi!
If I comment out this condition, the image from the camera appears, however, in this case the fox nose is rendered incorrectly.

https://github.com/ThomasGorisse/sceneform-android-sdk/blob/ae96dce5c7957492166026e010f3762a4e5898b9/core/src/main/java/com/google/ar/sceneform/ArSceneView.java#L162

@ThomasGorisse @fvito
Hi!
I understood why this condition is required. When the scene is mirrored, the front face winding is inverted. It is written in the first item of the list:
https://developers.google.com/ar/reference/java/com/google/ar/core/Session.Feature#public-static-final-session.feature-front_camera
Can we make the new camera material double-sided so the face mesh can be rendered without changing the culling in its materials?
Currently, I don't understand why the flip doesn't affect the camera image.
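The winding inversion can also be checked numerically (a small plain-Java sketch; WindingDemo is an illustrative name): a mirror such as scale(-1, 1, 1) has determinant -1, and any transform with a negative determinant flips the signed area, i.e. the winding, of every triangle it maps.

```java
// Why mirroring the scene flips the front-face winding: a reflection has
// determinant -1, which reverses triangle orientation (signed area), so
// counter-clockwise triangles become clockwise.
class WindingDemo {
    // Determinant of a 3x3 matrix given in row-major order.
    static float det3(float[] m) {
        return m[0] * (m[4] * m[8] - m[5] * m[7])
             - m[1] * (m[3] * m[8] - m[5] * m[6])
             + m[2] * (m[3] * m[7] - m[4] * m[6]);
    }

    // Twice the signed area of triangle (a, b, c) in 2D; positive = CCW.
    static float signedArea2(float ax, float ay, float bx, float by, float cx, float cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    public static void main(String[] args) {
        float[] identity = {1, 0, 0, 0, 1, 0, 0, 0, 1};
        float[] mirrorX = {-1, 0, 0, 0, 1, 0, 0, 0, 1}; // scale(-1, 1, 1)
        System.out.println("det(identity) = " + det3(identity)); // 1.0
        System.out.println("det(mirrorX) = " + det3(mirrorX));   // -1.0
        // A CCW triangle becomes CW after mirroring x:
        float before = signedArea2(0, 0, 1, 0, 0, 1);  // positive (CCW)
        float after = signedArea2(0, 0, -1, 0, 0, 1);  // negative (CW)
        System.out.println(before > 0 && after < 0);   // true
    }
}
```

This is why the renderer either inverts the expected front-face winding for the mirrored front camera or uses a double-sided material, which ignores winding entirely.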

@grassydragon (Contributor)

Currently, I don't understand why the flip doesn't affect the camera image.

That is because we either set vertexDomain to device or use the getWorldFromClipMatrix function.

@ThomasGorisse (Collaborator) commented Sep 16, 2021

Hi @grassydragon

First of all, thanks for taking care of this augmented faces subject. I didn't answer before because I was on the Environment Lighting subject.

So, here are my answers:

I have corrected the scale and rotation of the bones. Now it is only required to finish face texture rendering and object occlusion. Here is the updated model:
Model.zip
From what I see, the bones seem well placed and rotated in your attached file.
Should I adjust the material parameters?

You can just use the default Blender material and add the fox image texture node.
BTW, the textures are not included in the .blend but linked externally as files. I don't know if it's possible/better to include them in the .blend file.

Can we make the new camera material double-sided so the face mesh can be rendered without changing the culling in its materials?
Currently, I don't understand why the flip doesn't affect the camera image.

What is sure is that backface culling must not be defined inside the model, since we must keep in mind that augmented faces are not tied to front or rear camera usage. We should be able to use them in both cases, just as we must be able to use the front camera for other purposes than augmented faces.

By applying it to the material, do you mean calling this:
https://github.com/google/filament/blob/ca3b84197441e351ef490d61cbae14ef661a1bbe/android/filament-android/src/main/java/com/google/android/filament/Material.java#L190-L209

https://github.com/google/filament/blob/bdbc85773cf5c4d24c1c97469fd5f8f2343e2deb/android/filamat-android/src/main/java/com/google/android/filament/filamat/MaterialBuilder.java#L321-L331

That would mean applying a new material to the model, since it's only available from the Builder, and we would have to apply it to every added model in the front camera case?

That is because we either set vertexDomain to device or use the getWorldFromClipMatrix function.

I'm far from being as good as you on the fragment shader part, but did you have a look at the ARCore augmented faces sample, which uses this:

https://github.com/google-ar/arcore-android-sdk/blob/e12a49bbfc24cb34f365a42e59a210a04c9942c3/samples/augmented_faces_java/app/src/main/java/com/google/ar/core/examples/java/common/rendering/BackgroundRenderer.java#L175-L179

https://github.com/google-ar/arcore-android-sdk/blob/e12a49bbfc24cb34f365a42e59a210a04c9942c3/samples/augmented_faces_java/app/src/main/java/com/google/ar/core/examples/java/common/rendering/BackgroundRenderer.java#L304-L319

I will have more time to help you on this once the Environment Lighting PR is merged, but we have a lot of demand for augmented faces.

Thanks

@grassydragon (Contributor)

By applying to the material, do you mean calling this

I actually meant adding culling: front here: https://github.com/ThomasGorisse/sceneform-android-sdk/blob/e3a594b3458d36b32e1d37e74bba8d0902c5ea8f/ux/sampledata/sceneform_face_mesh_material.mat#L15

And here: https://github.com/ThomasGorisse/sceneform-android-sdk/blob/e3a594b3458d36b32e1d37e74bba8d0902c5ea8f/ux/sampledata/sceneform_face_mesh_occluder_material.mat#L13

But this isn't a good solution and won't be needed after we restore these lines: https://github.com/ThomasGorisse/sceneform-android-sdk/blob/e3a594b3458d36b32e1d37e74bba8d0902c5ea8f/core/src/main/java/com/google/ar/sceneform/ArSceneView.java#L171

If you don't mind, I will make the camera material double-sided (so we can avoid the missing camera image) and push the fully working sample.

@ThomasGorisse (Collaborator)

I totally trust you on this part.
We are all equally responsible for this repo, so go go go, push and I will do the merge with the other PR.

# Conflicts:
#	core/src/main/java/com/google/ar/sceneform/ArSceneView.java
#	core/src/main/java/com/google/ar/sceneform/rendering/PlaneRenderer.java
#	samples/augmented-images/src/main/java/com/google/ar/sceneform/samples/augmentedimages/MainActivity.java
#	samples/image-texture/src/main/java/com/google/ar/sceneform/samples/imagetexture/MainActivity.java
#	settings.gradle
#	ux/src/main/java/com/google/ar/sceneform/ux/BaseArFragment.java
@@ -157,6 +157,12 @@ public void setTexture(String name, Texture texture) {
}
}

public void setBaseColorTexture(Texture texture) {
Review comment (Contributor):

Do we still need this method?

Frame frame = arFragment.getArSceneView().getArFrame();

// Get a list of AugmentedFace which are updated on this frame.
Collection<AugmentedFace> augmentedFaces = frame.getUpdatedTrackables(AugmentedFace.class);
Review comment (Contributor):

This variant seems fine to me.

@grassydragon (Contributor) commented Sep 24, 2021

Hello!
I merged the master branch and cleaned up the code. Everything works fine, but please pay attention to my comments above. Should multiple faces work? I tried but didn't have any success.

@ThomasGorisse merged commit de89d56 into master Sep 25, 2021
@ThomasGorisse mentioned this pull request Sep 26, 2021
@grassydragon deleted the augmentedFaces-sample branch January 8, 2022 12:14
Successfully merging this pull request may close these issues:

  • Video pivot issue with VideoNode