You can tune simthreshd1 and cdfthreshd to trade off attention accuracy (higher values) against sparsity (lower values). However, for the best balance of accuracy and sparsity, we recommend running the tuning process described below before inference.
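For orientation, here is a minimal Python sketch of how these thresholds might be passed to a sparse attention call. The import path, function name, and threshold values below are illustrative assumptions, not the verified API; consult the repository for the actual signature.

import torch
# Hypothetical import and function name, shown only for illustration;
# the real SpargeAttn API may expose a different entry point.
from spas_sage_attn import spas_sage2_attn_meansim_cuda

# Toy tensors: (batch, heads, sequence length, head dim) in FP16 on GPU.
q = torch.randn(1, 16, 4096, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 16, 4096, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 16, 4096, 64, dtype=torch.float16, device="cuda")

# Higher thresholds skip fewer blocks (more accurate, less sparse);
# lower thresholds skip more blocks (less accurate, more sparse).
out = spas_sage2_attn_meansim_cuda(
    q, k, v,
    simthreshd1=0.6,   # illustrative value, not a recommended setting
    cdfthreshd=0.98,   # illustrative value, not a recommended setting
    is_causal=False,
)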
CogVideoX
Tuning:
# sequential tuning
python evaluate/cogvideo_example.py --use_spas_sage_attn --model_out_path evaluate/models_dict/CogVideoX-2b_0.06_0.07.pt --tune
# parallel tuning; this will use all GPUs available on the machine
python evaluate/cogvideo_example.py --use_spas_sage_attn --model_out_path evaluate/models_dict/CogVideoX-2b_0.06_0.07.pt --tune --parallel_tune
Inference:
# `--compile` is optional and will slow down the first inference run.
python evaluate/cogvideo_example.py --use_spas_sage_attn --model_out_path evaluate/models_dict/CogVideoX-2b_0.06_0.07.pt --compile
Note:
We provide pre-tuned hyper-parameters (CogVideoX-2b_0.06_0.07.pt) that allow running the inference script directly. However, for better speed and quality, we recommend re-tuning, because the provided hyper-parameters were tuned with SpargeAttn based on SageAttention, whereas the default API is now based on SageAttention2.
Note: --compile is optional; it further accelerates video generation but adds overhead to the first generation.
Llama
The tuning and inference usage is similar to CogVideoX.
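For instance, a tuning run might look like the following; the script name and output path are illustrative assumptions based on the CogVideoX pattern, so check the evaluate/ directory for the actual file.

# hypothetical example mirroring the CogVideoX commands above; the script name may differ
python evaluate/llama_example.py --use_spas_sage_attn --model_out_path evaluate/models_dict/llama_tuned.pt --tune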
Supported models
Here is a list of the models tuned so far; see our Hugging Face page for all tuned checkpoints.
Our approach is universal, and we warmly welcome contributions! Feel free to submit a pull request to support more models. 🚀
Note: All experiments in the above table and our paper used SpargeAttn based on SageAttention. An updated implementation based on SageAttention2 is now available and offers a further 30% speedup.
Figure: The quality of video generation on Mochi.
Figure: End-to-end performance on NIAH.
Citation
If you use this code or find our work valuable, please cite:
@inproceedings{zhang2025spargeattn,
  title={SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference},
  author={Zhang, Jintao and Xiang, Chendong and Huang, Haofeng and Wei, Jia and Xi, Haocheng and Zhu, Jun and Chen, Jianfei},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2025}
}

@inproceedings{zhang2025sageattention,
  title={SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration},
  author={Zhang, Jintao and Wei, Jia and Zhang, Pengle and Zhu, Jun and Chen, Jianfei},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2025}
}

@inproceedings{zhang2024sageattention2,
  title={SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-Thread INT4 Quantization},
  author={Zhang, Jintao and Huang, Haofeng and Zhang, Pengle and Wei, Jia and Zhu, Jun and Chen, Jianfei},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2025}
}