<!-- ## **TeaCache4Mochi** -->
# TeaCache4Mochi
[TeaCache](https://github.com/LiewFeng/TeaCache) can speed up [Mochi](https://github.com/genmoai/mochi) 2x in a training-free manner, without much visual quality degradation. The following video shows results generated by TeaCache-Mochi with rel_l1_thresh values of 0 (original), 0.06 (1.5x speedup), and 0.09 (2.1x speedup).

https://github.com/user-attachments/assets/29a81380-46b3-414f-a96b-6e3acc71b6c4
## 📈 Inference Latency Comparisons on a Single A800
| mochi-1-preview | TeaCache (0.06) | TeaCache (0.09) |
|:---------------:|:---------------:|:---------------:|
|     ~30 min     |     ~20 min     |     ~14 min     |
## Installation
```shell
pip install --upgrade diffusers[torch] transformers protobuf tokenizers sentencepiece imageio
```
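
`teacache_mochi.py` builds on the Mochi pipeline from Diffusers. As a quick check that the dependencies above are installed correctly, here is a minimal sketch of plain (uncached) Mochi inference; it assumes the standard diffusers `MochiPipeline` API, and the prompt and generation settings are placeholders:

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Load the base Mochi pipeline in bfloat16.
pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16
)
# Optional, but helps fit the model on a single GPU.
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

# Placeholder prompt and settings; adjust to your use case.
frames = pipe(
    prompt="A close-up of a chameleon slowly changing color",
    num_frames=85,
    num_inference_steps=64,
).frames[0]
export_to_video(frames, "mochi.mp4", fps=30)
```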
## Usage
You can modify `rel_l1_thresh` at line 174 of `teacache_mochi.py` to obtain your desired trade-off between latency and visual quality. For single-GPU inference, use the following command:
```bash
python teacache_mochi.py
```
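
For intuition on what `rel_l1_thresh` controls, the sketch below shows the kind of decision rule TeaCache applies at each denoising step: it accumulates the relative L1 change of the timestep-embedding-modulated input and reuses the cached transformer residual while that accumulated change stays below the threshold. This is an illustrative simplification, not the exact code in `teacache_mochi.py`; in particular, the released implementation rescales the distance with a fitted polynomial, and the function name and signature here are hypothetical.

```python
from typing import Optional, Tuple
import torch

def should_reuse_cache(modulated_inp: torch.Tensor,
                       prev_modulated_inp: Optional[torch.Tensor],
                       accumulated_rel_l1: float,
                       rel_l1_thresh: float = 0.06) -> Tuple[bool, float]:
    """Decide whether the cached transformer residual can be reused at this step.

    Returns (reuse_cache, updated_accumulated_rel_l1).
    """
    if prev_modulated_inp is None:
        # First denoising step: nothing cached yet, run the full model.
        return False, 0.0
    # Relative L1 change of the modulated input since the last full pass.
    rel_l1 = ((modulated_inp - prev_modulated_inp).abs().mean()
              / prev_modulated_inp.abs().mean()).item()
    accumulated_rel_l1 += rel_l1
    if accumulated_rel_l1 < rel_l1_thresh:
        # Change is still small: skip the transformer blocks, reuse the cache.
        return True, accumulated_rel_l1
    # Change is large: recompute this step and reset the accumulator.
    return False, 0.0
```

A larger `rel_l1_thresh` lets more steps reuse the cache (more speedup) at the cost of drifting further from the uncached output, which is the trade-off shown in the latency table above.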
## Citation
If you find TeaCache useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
```bibtex
@article{liu2024timestep,
title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
journal={arXiv preprint arXiv:2411.19108},
year={2024}
}
```
## Acknowledgements
We would like to thank the contributors to [Mochi](https://github.com/genmoai/mochi) and [Diffusers](https://github.com/huggingface/diffusers).