We use AnimateDiff v2 in our model. Feel free to try other versions of AnimateDiff. Moreover, our method also works with other video generation models that contain a temporal attention module (e.g., Stable Video Diffusion and DynamiCrafter).
Download the motion module checkpoint for AnimateDiff v2, mm_sd_v15_v2.ckpt (Google Drive / HuggingFace / CivitAI), and put it in models/Motion_Module/.
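A minimal sketch of the expected layout, assuming the checkpoint was downloaded to the current directory:

```bash
# Assumed: mm_sd_v15_v2.ckpt sits in the current directory after download;
# adjust the source path to wherever the file was actually saved.
mkdir -p models/Motion_Module
mv mm_sd_v15_v2.ckpt models/Motion_Module/
```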
Edit video_name in configs/prompts/v2/v2-0-RealisticVision.yaml, then run the inference command sketched below. The generated samples can be found in the samples/ folder.
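Assuming the repository keeps AnimateDiff's inference entry point (scripts/animate.py is an assumption here, not confirmed), the command would look roughly like:

```bash
# Hypothetical invocation: assumes an AnimateDiff-style scripts/animate.py entry point.
# Check the repository's scripts/ directory for the actual script name and flags.
python -m scripts.animate --config configs/prompts/v2/v2-0-RealisticVision.yaml
```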
One-shot camera motion disentanglement
For one-shot camera motion disentanglement, prepare a reference video and the corresponding mask (we suggest using SAM), and set video_name and mask_save_dir in configs/prompts/v2/v2-1-RealisticVision.yaml. Then run:
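Under the same assumption about the entry point as above:

```bash
# Hypothetical command; substitute the repository's actual inference script if it differs.
python -m scripts.animate --config configs/prompts/v2/v2-1-RealisticVision.yaml
```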
The generated samples can be found in the samples/ folder.
Few-shot camera motion disentanglement
Coming soon.
Citation
If you find this code helpful for your research, please cite:
@misc{hu2024motionmaster,
title={MotionMaster: Training-free Camera Motion Transfer For Video Generation},
author={Teng Hu and Jiangning Zhang and Ran Yi and Yating Wang and Hongrui Huang and Jieyu Weng and Yabiao Wang and Lizhuang Ma},
year={2024},
eprint={2404.15789},
archivePrefix={arXiv},
primaryClass={cs.CV}
}