This branch is specifically designed for Stable Diffusion WebUI Forge by lllyasviel. See here for how to install Forge and this extension. See Update for current status.
This extension aims to integrate AnimateDiff with CLI into lllyasviel's Forge adaptation of AUTOMATIC1111 Stable Diffusion WebUI and form the most easy-to-use AI video toolkit. Once this extension is enabled, you can generate GIFs in exactly the same way as you generate images.
This extension implements AnimateDiff in a different way. It makes heavy use of Unet Patcher, so you do not need to reload your model weights if you don't want to, and I can almost entirely get rid of monkey-patching WebUI and ControlNet.
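Since generation works just like normal image generation, the extension can also be driven through the WebUI's `/sdapi/v1/txt2img` API via `alwayson_scripts`. The sketch below only builds an illustrative payload; the specific argument names (`video_length`, `fps`, `format`, the motion module filename) are assumptions for illustration, so check the extension's API documentation for the authoritative fields.

```python
import json

# Hypothetical sketch of a txt2img request payload with AnimateDiff enabled.
# Field names inside "args" are assumptions, not the verified API schema.
payload = {
    "prompt": "a cat walking on grass",
    "steps": 20,
    "alwayson_scripts": {
        "AnimateDiff": {
            "args": [
                {
                    "enable": True,
                    "model": "mm_sd15_v3.safetensors",  # motion module name (assumed)
                    "video_length": 16,                 # number of frames (assumed)
                    "fps": 8,
                    "format": ["GIF"],
                }
            ]
        }
    },
}

# Sending the request requires a running WebUI instance launched with --api:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(json.dumps(payload["alwayson_scripts"]["AnimateDiff"], indent=2))
```

The point is that AnimateDiff rides along with an ordinary txt2img call; no separate endpoint is involved.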
You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI. This extension will also be redesigned for Forge later.
TusiArt (for users physically inside mainland P.R. China) and TensorArt (for others) offer an online service of this extension.
Update | TODO | Model Zoo | Documentation | Tutorial | Thanks | Star History | Sponsor
- v2.0.0-f in 02/05/2023: txt2img, prompt travel, infinite generation, and all kinds of optimizations have been proven to work properly and elegantly.
- v2.0.1-f in 02/11/2023: ControlNet V2V in the txt2img panel works properly and elegantly. You can also try adding a mask and inpainting.
- MotionLoRA and i2i batch are still under heavy construction, but I expect to release a working version within a week.
- Once all previous features are working properly, I will release SparseCtrl, Magic Animate and Moore Animate Anyone.
- An official video tutorial will be available on YouTube and bilibili.
- A bunch of new models, advanced parameters and features may be implemented soon.
- All problems in the master branch will be fixed soon, but new feature updates for the original A1111 + Mikubill ControlNet extension may be postponed until I have time to rewrite the ControlNet extension.
I am maintaining a HuggingFace repo that provides all official models in fp16 & safetensors format. You are highly recommended to use my link; you MUST use it to download the adapter for V3. You may still use the old links if you want, for all models except the V3 adapter.
- "Official" models by @guoyww: Google Drive | HuggingFace | CivitAI
- "Stabilized" community models by @manshoety: HuggingFace
- "TemporalDiff" models by @CiaraRowles: HuggingFace
- "HotShotXL" models by @hotshotco: HuggingFace
- How to Use -> Preparation | WebUI | API | Parameters
- Features -> Img2Vid | Prompt Travel | ControlNet V2V | [ Model Spec -> Motion LoRA | V3 | SDXL ]
- Performance -> [ Optimizations -> Attention | FP8 | LCM ] | VRAM | Batch Size
- Demo -> Basic Usage | Motion LoRA | Prompt Travel | AnimateDiff V3 | AnimateDiff XL | ControlNet V2V
TODO
I thank researchers from Shanghai AI Lab, especially @guoyww, for creating AnimateDiff. I also thank @neggles and @s9roll7 for creating and improving AnimateDiff CLI Prompt Travel. This extension would not be possible without these creative works.
I also thank community developers, especially
- @zappityzap who developed the majority of the output features
- @TDS4874 and @opparco for resolving the grey issue, which significantly improved the performance
- @lllyasviel for offering forge technical support
and many others who have contributed to this extension.
I also thank community users, especially @streamline, who provided the dataset and workflow for ControlNet V2V. His workflow is extremely amazing and definitely worth checking out.
You can sponsor me via WeChat, AliPay or PayPal. You can also support me via ko-fi or afdian.
WeChat | AliPay | PayPal
---|---|---
![]() | ![]() | ![]()