Hello! I am Yingqing He. Nice to meet you!
👨‍💻 I am currently a PhD student at HKUST. My research focuses on text-to-video generation and multimodal generation.
📫 How to reach me: yhebm@connect.ust.hk
📣 Our lab is hiring engineering-oriented research assistants (RAs). If you would like to apply, feel free to reach out with your CV!
🚧 Other projects:
📌 Pinned repositories:
- **AILab-CVC/VideoCrafter** — wait, no: **AILab-CVC/VideoCrafter**: VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
- **Awesome-LLMs-meet-Multimodal-Generation**: 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio).
- **ScaleCrafter**: [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time.
- **mayuelala/FollowYourPose**: [AAAI 2024] Official implementation of "Follow-Your-Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos".
- **AILab-CVC/Animate-A-Story**: Retrieval-Augmented Video Generation for Telling a Story