
Visualization results #2

Open
yangdaowu opened this issue Apr 30, 2024 · 9 comments

Comments

@yangdaowu

Hello, I found that the video results visualized with the method you provided differ from the demonstration: the facial expressions and posture movements appear in different regions. Could you also tell me how to run the training and testing code?

@JeremyCJM
Owner

Visualization

  • Yes, the given visualization script for BEAT directly visualizes the generated expressions and gestures via face meshes and body skeletons in Blender.
  • In our paper, we visualize the generated expressions and gestures of BEAT using MetaHuman in Unreal Engine 5. This is done by retargeting the current BVH skeleton to the MetaHuman skeleton for gesture animation, and feeding the expression parameters to the ARKit module of MetaHuman for expression animation.

Train and test

  • I have updated the README.md and added the commands for training and testing. Please also reinstall the new environment.
  • For dataset preprocessing, please refer to Training code #1 (comment) and the original BEAT GitHub repo. We extract motion clips of the SHOW data via lmdb, in a similar way to BEAT.

@yangdaowu
Author

Thank you for providing the training command. I encountered the following issue when using it:
No such file or directory: 'data/BEAT/beat_cache/beat_4english_15_141/weights/GesAxisAngle_Face_300.bin'

Is this because the dataset was not preprocessed using the preprocessing .ipynb file provided in BEAT?

@JeremyCJM
Owner

The "GesAxisAngle_Face_300.bin" file is an autoencoder checkpoint used to compute the Frechet Distance metrics. I will upload the autoencoder checkpoints later. For now, you can temporarily comment them out and skip computing the Frechet Distance metrics during training.
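Since the autoencoder here only supplies features, the metric itself reduces to the standard Fréchet distance between two Gaussians fitted to the feature sets. As a minimal sketch (not the repo's actual metric code; the function name and the use of NumPy/SciPy are my own assumptions), it could look like:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two (N, D) feature arrays."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sigma_a = np.cov(feats_a, rowvar=False)
    sigma_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    # Matrix square root of the covariance product; tiny imaginary
    # residue from numerical error is discarded.
    covmean, _ = linalg.sqrtm(sigma_a @ sigma_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_a + sigma_b - 2.0 * covmean))
```

Here `feats_a` and `feats_b` would be the autoencoder latents of generated and ground-truth motion clips.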

@yangdaowu
Author

I encountered a new issue using ae_100.bin in CaMN.
Traceback (most recent call last):
  File "/home/ydw/anaconda3/envs/Talko/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 74, in _wrap
    fn(i, *args)
  File "/media/ydw/sda/PycharmProjects/Diffsheg/runner.py", line 303, in main_worker
    train_dataset = __import__(f"datasets.{opt.dataset_name}", fromlist=["something"]).BeatDataset(opt, "train")
  File "/media/ydw/sda/PycharmProjects/Diffsheg/datasets/beat.py", line 116, in __init__
    self.aud_lmdb_env = lmdb.open(self.aud_feat_path, readonly=True, lock=False)
lmdb.Error: data/BEAT/beat_cache/beat_4english_15_141/train/aud_feat_cache/hubert_large_ls960_ft: No such file or directory

@JeremyCJM
Owner

This is the lmdb directory for precomputed HuBERT features. You can refer to the function below to create the HuBERT feature cache for the training and testing audio.

def get_hubert_from_16k_speech_long(hubert_model, wav2vec2_processor, speech, device="cuda:0"):

@lovemino

Thank you for this awesome work! I encountered the following issue when using this command:
No such file: ges_axis_angle_300.bin
I noticed that the metrics in the experimental results you provided differ from those in BEAT. I am wondering whether you used a different MotionAutoencoder for testing. If so, could you kindly provide the autoencoder checkpoint file? I would greatly appreciate it.

@JeremyCJM
Owner

JeremyCJM commented Jul 1, 2024

Hi @lovemino and @yangdaowu , you can find all the autoencoder weights here: https://drive.google.com/file/d/1Wm2WMlacwStFaciCh7UlhQeyA3E2yEnj/view?usp=sharing . Note that the autoencoders are only used to compute features for the Frechet Distance metrics.

@ylhua

ylhua commented Aug 6, 2024

Hi, could you please offer the code related to visualizing BEAT motion in UE?

@HellAngel18

I ran into the same new problem when using ae_100.bin in CaMN:
Traceback (most recent call last):
  File "/home/ydw/anaconda3/envs/Talko/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 74, in _wrap
    fn(i, *args)
  File "/media/ydw/sda/PycharmProjects/Diffsheg/runner.py", line 303, in main_worker
    train_dataset = __import__(f"datasets.{opt.dataset_name}", fromlist=["something"]).BeatDataset(opt, "train")
  File "/media/ydw/sda/PycharmProjects/Diffsheg/datasets/beat.py", line 116, in __init__
    self.aud_lmdb_env = lmdb.open(self.aud_feat_path, readonly=True, lock=False)
lmdb.Error: data/BEAT/beat_cache/beat_4english_15_141/train/aud_feat_cache/hubert_large_ls960_ft: No such file or directory

Hello, how did you finally solve this problem? I am very grateful.
