Visualization results #2
Labels: Visualization, Train and test
Thank you for providing the training command. I encountered the following issues when using this command. Is this because the dataset was not preprocessed using the preprocessing `.ipynb` file provided in BEAT?
The "GesAxisAngle_Face_300.bin" file is an autoencoder checkpoint used to compute the Fréchet Distance metrics. I will upload the autoencoder checkpoints later. For now, you can temporarily comment them out and train without computing the Fréchet Distance metrics.
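For context, the Fréchet Distance here compares Gaussian fits of autoencoder features from real and generated motion. A minimal, self-contained sketch of that metric (the function name and feature shapes are illustrative, not the repo's actual API):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two feature sets of shape (N, D)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; may pick up a tiny
    # imaginary component from numerical error, which we discard.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

In the real pipeline the inputs would be features produced by the autoencoder checkpoint (e.g. `GesAxisAngle_Face_300.bin`) rather than raw motion.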
I encountered a new issue when using ae_100.bin in CaMN.
This is the lmdb directory for precomputed HuBERT features. You can refer to the function below to create the HuBERT feature cache for the training and testing audios: `DiffSHEG/trainers/ddpm_beat_trainer.py`, line 1430 (commit `3ebf305`).
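The idea behind that cache is simply to run the HuBERT encoder once per audio file and reuse the stored features afterwards. A minimal sketch of that pattern, with two loudly labeled substitutions: pickled files in a plain directory stand in for the repo's lmdb store, and a random-feature stub stands in for the actual pretrained HuBERT forward pass (which in practice would come from e.g. the `transformers` library):

```python
import pickle
from pathlib import Path
import numpy as np

CACHE_DIR = Path("hubert_cache")  # stand-in for the repo's lmdb directory


def extract_hubert_features(audio_path):
    # Placeholder: the real pipeline runs a pretrained HuBERT model over the
    # waveform. Random frames keep this sketch self-contained; the (frames,
    # 768) shape mirrors HuBERT-base hidden states.
    rng = np.random.default_rng(abs(hash(audio_path)) % (2**32))
    return rng.normal(size=(100, 768)).astype(np.float32)


def get_cached_features(audio_path):
    """Compute features once per audio file, then serve them from the cache."""
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file = CACHE_DIR / (Path(audio_path).stem + ".pkl")
    if cache_file.exists():
        return pickle.loads(cache_file.read_bytes())
    feats = extract_hubert_features(audio_path)
    cache_file.write_bytes(pickle.dumps(feats))
    return feats
```

To build the cache for training and testing, you would loop `get_cached_features` over every audio file before starting the trainer.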
Thank you for this awesome work! I encountered the following issues when using this command:
Hi @lovemino and @yangdaowu , you can find all the autoencoder weights here: https://drive.google.com/file/d/1Wm2WMlacwStFaciCh7UlhQeyA3E2yEnj/view?usp=sharing . Note that the autoencoders are only used to compute features for the Fréchet Distance metrics.
Hi, could you please provide the code for visualizing BEAT motion in UE?
Hello, how did you finally solve this problem? I would be very grateful for any help.
Hello, I found that the video results visualized using the method you provided differ from the demonstration: the facial expressions and posture movements appear in different regions. Could you tell me how to run the training and testing code?