I noticed the integration with JoyVasa—great to see advancements in this space! I'm particularly curious about how it can be leveraged for video-to-video lip-sync (focusing solely on lip movements, without eye or head motion). Specifically, I'm wondering what parameters work best to make it a viable replacement for older algorithms like Wav2Lip.
I've experimented with some parameter settings myself and read through discussions in the LivePortrait repo. The community seems to have diverging views—some advocate for relative motion, while others favor absolute motion.
Should I consider retargeting the lips for better accuracy? Additionally, what parameter configuration has empirically been shown to work best in your experience? Any insights would be greatly appreciated.
Currently, it defaults to driving with normalize_lip. How else would you like to retarget the lips? For me, relative motion combined with lip region animation works best.
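To make the suggested setting concrete, here is a minimal sketch of the configuration described above: relative motion, the default lip normalization, and animation restricted to the lip region so that eyes and head pose are left untouched. The field names (`flag_relative_motion`, `flag_normalize_lip`, `animation_region`) mirror LivePortrait-style inference arguments; the exact names, defaults, and entry point in this repo's JoyVasa integration may differ, so treat this as an illustration rather than the actual API.

```python
# Sketch only: argument names are assumed to follow LivePortrait-style
# inference options; check this repo's actual config before relying on them.
from dataclasses import dataclass

@dataclass
class LipSyncConfig:
    source: str                          # source video to re-lip-sync
    driving: str                         # driving video (or JoyVasa-generated motion)
    flag_relative_motion: bool = True    # relative (delta-based) motion, as recommended above
    flag_normalize_lip: bool = True      # the default lip normalization mentioned above
    animation_region: str = "lip"        # animate only the lip region, not eyes or head
    flag_stitching: bool = True          # stitch the animated crop back into the full frame

# Hypothetical file names, purely for illustration.
cfg = LipSyncConfig(source="source_talking.mp4", driving="driving.mp4")
```

The intuition behind preferring relative motion for this use case is that the driving lip deltas are applied on top of the source frame's own expression rather than replacing it outright, which generally preserves the source identity and existing head motion better in video-to-video editing than absolute motion does.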