Inference latency #16

Closed · bmwas opened this issue Jan 4, 2025 · 2 comments

bmwas commented Jan 4, 2025

What is the inference latency of the model compared to other SOTA models such as Wav2Lip, MuseTalk, etc.?

In particular, is the inference latency low enough for real-time use cases?

Thank you

chunyu-li (Collaborator) commented

Here is a test for reference:

- Machine: A100
- Inference precision: fp16
- Duration of the generated video: 10 s
- Batch size: 1
- Consecutive frame length: 16
- DDIM steps: 20
- Time taken: 28 seconds (excluding landmark detection and affine transformation; only model forward time)
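
For anyone reproducing a number like this, below is a minimal PyTorch timing sketch. The `DummyDenoiser` module, the `model(latents, t, audio_feats)` call signature, and all tensor shapes are illustrative assumptions, not this repository's actual API; the transferable parts are the fp16 autocast, the warm-up iterations, and the `torch.cuda.synchronize()` calls, without which queued GPU work makes wall-clock timings meaningless.

```python
import time
import torch
import torch.nn as nn

class DummyDenoiser(nn.Module):
    """Stand-in for the diffusion UNet; the real model and its call
    signature are assumptions, not this repository's actual API."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(4, 4, 3, padding=1)

    def forward(self, latents, t, audio_feats):
        return self.net(latents)

def time_forward(model, latents, t, audio_feats,
                 ddim_steps=20, warmup=3, runs=10):
    """Average wall-clock time of one ddim_steps-step denoising pass,
    fp16 autocast, measuring only model forward time."""
    model.eval()
    with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
        for _ in range(warmup):              # warm up CUDA kernels/caches
            for _ in range(ddim_steps):
                model(latents, t, audio_feats)
        torch.cuda.synchronize()             # start from an idle GPU
        start = time.perf_counter()
        for _ in range(runs):
            for _ in range(ddim_steps):
                model(latents, t, audio_feats)
        torch.cuda.synchronize()             # flush queued GPU work
        return (time.perf_counter() - start) / runs

if __name__ == "__main__":
    device = "cuda"
    model = DummyDenoiser().to(device)
    # Batch size 1 with 16 consecutive frames folded into the batch dim,
    # 64x64 latents -- illustrative shapes only.
    latents = torch.randn(16, 4, 64, 64, device=device)
    t = torch.tensor([500], device=device)
    audio = torch.randn(16, 50, 384, device=device)
    print(f"{time_forward(model, latents, t, audio):.3f} s per 20-step pass")
```

For context, 28 s of compute for 10 s of output is roughly 2.8× real time at these settings, so real-time use would need a sizable speedup (e.g., fewer DDIM steps or a larger batch).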

bmwas (Author) commented Jan 6, 2025

@chunyu-li That's very helpful. Thank you!
