This branch is specifically designed for Stable Diffusion WebUI Forge by lllyasviel. See here for how to install forge and this extension. See Update for current status.
This extension aims to integrate AnimateDiff, with features from AnimateDiff CLI Prompt Travel, into lllyasviel's Forge adaptation of the AUTOMATIC1111 Stable Diffusion WebUI and form the most easy-to-use AI video toolkit. Once this extension is enabled, you can generate GIFs in exactly the same way as you generate images.
This extension implements AnimateDiff in a different way. It makes heavy use of Forge's Unet Patcher, so you do not need to reload your model weights unless you want to, and I do not have to monkey patch anything.
You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI. This extension will also be redesigned for forge later.
TusiArt (for users physically inside P.R.China mainland) and TensorArt (for users outside P.R.China mainland) offer online service of this extension.
Update | TODO | Model Zoo | Documentation | Tutorial | Thanks | Star History | Sponsor
`v2.0.0-f` in `02/05/2023`:
- t2i, prompt travel, infinite generation, and all kinds of optimizations have been proven to work properly and elegantly.
- Motion LoRA and ControlNet V2V are still under heavy construction, but I expect to release a working version this week.
- Once all previous features work properly, I will release SparseCtrl, Magic Animate, and Moore Animate Anyone.
- This documentation will then be completely refactored, and an official video tutorial will be available on YouTube and bilibili.
- Later updates will land in both WebUIs where possible. However, due to the significant difficulty of maintaining Mikubill/sd-webui-controlnet, I will not be able to bring some features to the original A1111 WebUI. Such cases will be clearly documented.
I am maintaining a HuggingFace repo that provides all official models in fp16 & safetensors format. I highly recommend downloading from my link, and you MUST use my link to download the adapter for V3. You may still use the old links below if you want, for all models except the V3 adapter.
- "Official" models by @guoyww: Google Drive | HuggingFace | CivitAI
- "Stabilized" community models by @manshoety: HuggingFace
- "TemporalDiff" models by @CiaraRowles: HuggingFace
- "HotShotXL" models by @hotshotco: HuggingFace
- How to Use -> Preparation | WebUI | API | Parameters
- Features -> Img2Vid | Prompt Travel | ControlNet V2V | [ Model Spec -> Motion LoRA | V3 | SDXL ]
- Performance -> [ Optimizations -> Attention | FP8 | LCM ] | VRAM | #Batch Size
- Demo -> Basic Usage | Motion LoRA | Prompt Travel | AnimateDiff V3 | AnimateDiff SDXL | ControlNet V2V
TODO
I thank the researchers from Shanghai AI Lab, especially @guoyww, for creating AnimateDiff. I also thank @neggles and @s9roll7 for creating and improving AnimateDiff CLI Prompt Travel. This extension could not have been made without these creative works.
I also thank community developers, especially
- @zappityzap who developed the majority of the output features
- @TDS4874 and @opparco for resolving the grey issue, which significantly improved performance
- @lllyasviel for providing technical support of forge
and many others who have contributed to this extension.
I also thank community users, especially @streamline, who provided the dataset and workflow for ControlNet V2V. His workflow is extremely amazing and definitely worth checking out.
You can sponsor me via WeChat, AliPay or PayPal. You can also support me via ko-fi or afdian.
WeChat | AliPay | PayPal
---|---|---
![]() | ![]() | ![]()