DreamRunner is implemented on top of CogVideoX-2B. You can download it here and place it under pretrained_models/CogVideoX-2b.
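For example, the checkpoint can be fetched with the Hugging Face CLI (a minimal sketch; it assumes the model is hosted under the THUDM/CogVideoX-2b repository on the Hugging Face Hub):

pip install -U "huggingface_hub[cli]"
huggingface-cli download THUDM/CogVideoX-2b --local-dir pretrained_models/CogVideoX-2b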
Running the Code
T2V-CompBench
Inference
We provide the plans we used for T2V-CompBench in MotionDirector_SR3AI/t2v-combench/plan.
You can specify the GPUs you want to use in MotionDirector_SR3AI/t2v-combench-2b.sh for parallel inference; a sketch is shown below.
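For example, the GPU list inside the script might look like the following (a hypothetical sketch; the actual variable names in t2v-combench-2b.sh may differ):

gpus=(0 1 2 3)  # hypothetical: GPU ids, one parallel inference worker per id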
Then directly infer the 600 videos covering the 6 dimensions of T2V-CompBench with the following script:
cd MotionDirector_SR3AI
bash run_bench_2b.sh
The generated videos will be saved to MotionDirector_SR3AI/T2V-CompBench.
Evaluation
Please follow the T2V-CompBench evaluation protocol to evaluate the generated videos.
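For reference, a typical starting point is to clone the benchmark toolkit and follow its README for the metric-specific environments (repository URL assumed to be the official T2V-CompBench release):

git clone https://github.com/KaiyueSun98/T2V-CompBench.git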
Storytelling Video Generation
Coming soon!
Citation
If you find our project useful in your research, please cite the following paper:
@article{wang2024dreamrunner,
  author  = {Wang, Zun and Li, Jialu and Lin, Han and Yoon, Jaehong and Bansal, Mohit},
  title   = {DreamRunner: Fine-Grained Compositional Story-to-Video Generation with Retrieval-Augmented Motion Adaptation},
  journal = {arXiv preprint arXiv:2411.16657},
  year    = {2024},
  url     = {https://arxiv.org/abs/2411.16657}
}
About
[AAAI 2026] Official implementation of DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation