
Movie Gen Bench

Movie Gen is a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. Here, we introduce our evaluation benchmark "Movie Gen Bench", which includes Movie Gen Video Bench (Section 3.5.2) and Movie Gen Audio Bench (Section 6.3.2), as detailed in the Movie Gen technical report.

To enable fair and easy comparison to Movie Gen for future work on these evaluation benchmarks, we additionally release the non-cherry-picked videos generated by Movie Gen on both Movie Gen Video Bench and Movie Gen Audio Bench.


Movie Gen Video Bench

Movie Gen Video Bench consists of 1003 prompts that cover the following testing concepts:

  1. human activity (limb and mouth motion, emotions, etc.)
  2. animals
  3. nature and scenery
  4. physics (fluid dynamics, gravity, acceleration, collisions, explosions, etc.)
  5. unusual subjects and unusual activities.

In addition to comprehensively covering these key testing concepts, the prompts are also well distributed across high, medium, and low motion levels.

Example

Prompt Concept Distribution

Download Video Benchmark

The prompt list benchmark/MovieGenVideoBench.txt is included in this repo. We additionally release the testing concept and motion-level tags for each prompt in benchmark/MovieGenVideoBenchWithTag.csv. The corresponding videos generated by Movie Gen can be downloaded via this link.

Movie Gen Video Bench is also available on Hugging Face.
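As a starting point, the two benchmark files can be loaded with Python's standard library: the `.txt` file holds one prompt per line, and the `.csv` file adds per-prompt tags. This is a minimal sketch; the column names in the sample (`prompt`, `concept`, `motion_level`) are illustrative assumptions, so check the actual CSV header before relying on them.

```python
import csv
import io

def load_prompts(txt_file):
    """One prompt per non-empty line, as in benchmark/MovieGenVideoBench.txt."""
    return [line.strip() for line in txt_file if line.strip()]

def load_tagged_prompts(csv_file):
    """Rows of benchmark/MovieGenVideoBenchWithTag.csv as dicts keyed by header."""
    return list(csv.DictReader(csv_file))

# Inline samples standing in for the real files; in practice, pass
# open("benchmark/MovieGenVideoBench.txt") etc. The tag columns below
# are hypothetical -- inspect the released CSV for the real schema.
sample_txt = io.StringIO("A dog surfing a wave\nA chef flipping pancakes\n")
sample_csv = io.StringIO(
    "prompt,concept,motion_level\n"
    "A dog surfing a wave,animals,high\n"
)

prompts = load_prompts(sample_txt)
rows = load_tagged_prompts(sample_csv)
```

The tags make it easy to slice evaluation results by concept or motion level, e.g. filtering `rows` to only high-motion prompts before scoring.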

Movie Gen Audio Bench

Movie Gen Audio Bench consists of 527 generated videos and their associated sound effect and music prompts.

  • It covers various ambient environments (e.g., indoor, urban, nature, transportation) and sound effects (e.g., human, animal, objects).
  • The videos are generated by Movie Gen Video; we additionally include the video prompts used to generate them.
  • It can be used to evaluate sound effect generation and joint sound effect and background music generation.
  • It can be used to evaluate video-to-audio and (text+video)-to-audio generation.

Download Audio Benchmark

benchmark/MovieGenAudioBenchSfx.jsonl includes the sound effect prompts used for sound effect generation, along with the video prompts used to generate the testing videos. The videos with audio and prompts can be downloaded via this link.

benchmark/MovieGenAudioBenchSfxMusic.jsonl includes the sound effect and music prompts used for joint sound effect and background music generation, along with the video prompts used to generate the testing videos. The videos with audio and prompts can be downloaded via this link.
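Both audio benchmark files are JSON Lines: one JSON object per line. A minimal loader is sketched below; the field names in the sample record (`video_prompt`, `audio_prompt`) are assumptions for illustration, so inspect the released `.jsonl` files for the actual schema.

```python
import io
import json

def load_jsonl(f):
    """Parse one JSON object per non-empty line (the JSONL format)."""
    return [json.loads(line) for line in f if line.strip()]

# Inline sample standing in for benchmark/MovieGenAudioBenchSfx.jsonl;
# in practice, pass open("benchmark/MovieGenAudioBenchSfx.jsonl").
# The keys below are hypothetical, not the confirmed schema.
sample = io.StringIO(
    '{"video_prompt": "Waves crashing on rocks", '
    '"audio_prompt": "ocean waves, seagulls"}\n'
)
records = load_jsonl(sample)
```

The same loader works for both MovieGenAudioBenchSfx.jsonl and MovieGenAudioBenchSfxMusic.jsonl, since each is a flat list of per-video records.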

License

The model is licensed under the CC-BY-NC license.

Citation

If you find Movie Gen Bench useful, please consider citing:

@misc{polyak2024moviegencastmedia,
      title={Movie Gen: A Cast of Media Foundation Models}, 
      author={Adam Polyak and Amit Zohar and Andrew Brown and Andros Tjandra and Animesh Sinha and Ann Lee and Apoorv Vyas and Bowen Shi and Chih-Yao Ma and Ching-Yao Chuang and David Yan and Dhruv Choudhary and Dingkang Wang and Geet Sethi and Guan Pang and Haoyu Ma and Ishan Misra and Ji Hou and Jialiang Wang and Kiran Jagadeesh and Kunpeng Li and Luxin Zhang and Mannat Singh and Mary Williamson and Matt Le and Matthew Yu and Mitesh Kumar Singh and Peizhao Zhang and Peter Vajda and Quentin Duval and Rohit Girdhar and Roshan Sumbaly and Sai Saketh Rambhatla and Sam Tsai and Samaneh Azadi and Samyak Datta and Sanyuan Chen and Sean Bell and Sharadh Ramaswamy and Shelly Sheynin and Siddharth Bhattacharya and Simran Motwani and Tao Xu and Tianhe Li and Tingbo Hou and Wei-Ning Hsu and Xi Yin and Xiaoliang Dai and Yaniv Taigman and Yaqiao Luo and Yen-Cheng Liu and Yi-Chiao Wu and Yue Zhao and Yuval Kirstain and Zecheng He and Zijian He and Albert Pumarola and Ali Thabet and Artsiom Sanakoyeu and Arun Mallya and Baishan Guo and Boris Araya and Breena Kerr and Carleigh Wood and Ce Liu and Cen Peng and Dimitry Vengertsev and Edgar Schonfeld and Elliot Blanchard and Felix Juefei-Xu and Fraylie Nord and Jeff Liang and John Hoffman and Jonas Kohler and Kaolin Fire and Karthik Sivakumar and Lawrence Chen and Licheng Yu and Luya Gao and Markos Georgopoulos and Rashel Moritz and Sara K. Sampson and Shikai Li and Simone Parmeggiani and Steve Fine and Tara Fowler and Vladan Petrovic and Yuming Du},
      year={2024},
      eprint={2410.13720},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2410.13720}, 
}