arXiv:2406.03520 (cs)
[Submitted on 5 Jun 2024 (v1), last revised 3 Oct 2024 (this version, v2)]
Title: VideoPhy: Evaluating Physical Commonsense for Video Generation
Authors: Hritik Bansal, Zongyu Lin, Tianyi Xie, Zeshun Zong, Michal Yarom, Yonatan Bitton, Chenfanfu Jiang, Yizhou Sun, Kai-Wei Chang, Aditya Grover
Abstract: Recent advances in internet-scale video data pretraining have led to the development of text-to-video generative models that can create high-quality videos across a broad range of visual concepts, synthesize realistic motions, and render complex objects. Hence, these generative models have the potential to become general-purpose simulators of the physical world. However, it is unclear how far we are from this goal with the existing text-to-video generative models. To this end, we present VideoPhy, a benchmark designed to assess whether the generated videos follow physical commonsense for real-world activities (e.g., marbles will roll down when placed on a slanted surface). Specifically, we curate diverse prompts that involve interactions between various material types in the physical world (e.g., solid-solid, solid-fluid, fluid-fluid). We then generate videos conditioned on these captions from diverse state-of-the-art text-to-video generative models, including open models (e.g., CogVideoX) and closed models (e.g., Lumiere, Dream Machine). Our human evaluation reveals that the existing models severely lack the ability to generate videos adhering to the given text prompts, while also lacking physical commonsense. Specifically, the best-performing model, CogVideoX-5B, generates videos that adhere to the caption and physical laws for 39.6% of the instances. VideoPhy thus highlights that video generative models are far from accurately simulating the physical world. Finally, we propose an auto-evaluator, VideoCon-Physics, to reliably assess the performance of newly released models.
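The headline number in the abstract (e.g., 39.6% for CogVideoX-5B) is the fraction of generated videos that satisfy both criteria at once: adherence to the caption and consistency with physical laws. The sketch below shows how such a joint score could be computed from per-video binary human labels; the `Annotation` fields and names are illustrative assumptions, not the paper's released evaluation code.

```python
# Hypothetical sketch (not the authors' code): computing a joint
# "semantic adherence + physical commonsense" score, assuming each
# generated video carries two binary human labels:
#   sa = 1 if the video follows the caption, pc = 1 if it obeys physics.
from dataclasses import dataclass
from typing import List


@dataclass
class Annotation:
    prompt: str   # caption the video was generated from
    sa: int       # 1 if the video adheres to the caption, else 0
    pc: int       # 1 if the video follows physical commonsense, else 0


def joint_score(annotations: List[Annotation]) -> float:
    """Fraction of videos judged good on BOTH caption adherence and physics."""
    if not annotations:
        return 0.0
    hits = sum(1 for a in annotations if a.sa == 1 and a.pc == 1)
    return hits / len(annotations)


if __name__ == "__main__":
    demo = [
        Annotation("marbles roll down a slanted surface", sa=1, pc=1),
        Annotation("water pours into a glass", sa=1, pc=0),
        Annotation("a ball bounces on concrete", sa=0, pc=1),
    ]
    print(f"joint score: {joint_score(demo):.1%}")  # -> 33.3%
```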
| Comments: | 43 pages, 29 figures, 12 tables. Added CogVideo and Dream Machine in v2 |
| Subjects: | Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) |
| Cite as: | arXiv:2406.03520 [cs.CV] (or arXiv:2406.03520v2 [cs.CV] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2406.03520 (arXiv-issued DOI via DataCite) |
Submission history
From: Hritik Bansal
[v1] Wed, 5 Jun 2024 17:53:55 UTC (19,132 KB)
[v2] Thu, 3 Oct 2024 17:24:40 UTC (21,444 KB)