
# TeaCache4LTX-Video
[TeaCache](https://github.com/LiewFeng/TeaCache) can speed up [LTX-Video](https://github.com/Lightricks/LTX-Video) by about 2x with little visual quality degradation, in a training-free manner. The video below compares generations from TeaCache-LTX-Video at several `rel_l1_thresh` values: 0 (original), 0.03 (1.6x speedup), and 0.05 (2.1x speedup).
https://github.com/user-attachments/assets/1f4cf26c-b8c6-45e3-b402-840bcd6ba00e
## 📈 Inference Latency Comparisons on a Single A800
| LTX-Video-0.9.1 (baseline) | TeaCache (rel_l1_thresh=0.03) | TeaCache (rel_l1_thresh=0.05) |
|:--------------------------:|:-----------------------------:|:-----------------------------:|
| ~32 s | ~20 s | ~16 s |
## Installation
```shell
pip install --upgrade diffusers[torch] transformers protobuf tokenizers sentencepiece imageio
```
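For reference, a baseline (non-TeaCache) LTX-Video generation can be run through Diffusers. The snippet below is a minimal sketch assuming a recent `diffusers` release with `LTXPipeline` support; the prompt, resolution, and step count are illustrative defaults, not settings taken from `teacache_ltx.py`.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the LTX-Video pipeline in bfloat16 to fit comfortably on one GPU.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A calm mountain lake at sunrise, mist drifting over the water"
video = pipe(
    prompt=prompt,
    width=704,           # illustrative resolution; adjust to your GPU memory
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]

export_to_video(video, "output.mp4", fps=24)
```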
## Usage
You can modify `rel_l1_thresh` on line 187 of `teacache_ltx.py` to obtain your desired trade-off between latency and visual quality. For single-GPU inference, use the following command:
```bash
python teacache_ltx.py
```
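At a high level, TeaCache decides at each denoising step whether the transformer blocks can be skipped: it accumulates the relative L1 change of the timestep-embedding-modulated input across steps, recomputes only once that accumulated change crosses `rel_l1_thresh`, and otherwise reuses a cached residual. The sketch below illustrates this decision rule with toy stand-ins for the model; names such as `TeaCache` and `modulate` are illustrative rather than the actual `teacache_ltx.py` identifiers, and the polynomial rescaling the real implementation applies to the raw distance is omitted for brevity.

```python
import numpy as np

# --- Toy stand-ins for the real model components (illustrative only) ---
def modulate(hidden, t):
    # Timestep-embedding modulation, reduced to a simple scaling here.
    return hidden * (1.0 + 0.01 * t)

def transformer_blocks(x):
    # The expensive computation that TeaCache tries to skip.
    return x + 0.1 * np.tanh(x)

class TeaCache:
    def __init__(self, rel_l1_thresh=0.05):
        self.rel_l1_thresh = rel_l1_thresh  # larger => more reuse => faster
        self.accumulated = 0.0
        self.prev_modulated = None
        self.cached_residual = None

    def should_recompute(self, modulated):
        if self.prev_modulated is None:
            return True  # first step: nothing cached yet
        # Relative L1 change of the modulated input between adjacent steps.
        rel_l1 = (np.abs(modulated - self.prev_modulated).mean()
                  / np.abs(self.prev_modulated).mean())
        self.accumulated += rel_l1
        if self.accumulated >= self.rel_l1_thresh:
            self.accumulated = 0.0  # reset after a real forward pass
            return True
        return False

# --- Denoising loop with the caching decision ---
cache = TeaCache(rel_l1_thresh=0.05)
hidden = np.random.randn(16)
for t in range(50, 0, -1):
    modulated = modulate(hidden, t)
    if cache.should_recompute(modulated):
        out = transformer_blocks(modulated)      # full forward pass
        cache.cached_residual = out - modulated  # cache the residual
    else:
        out = modulated + cache.cached_residual  # cheap cached step
    cache.prev_modulated = modulated
    hidden = out
```

Raising `rel_l1_thresh` means the accumulated change crosses the threshold less often, so more steps reuse the cache and latency drops, at some cost in fidelity.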
## Citation
If you find TeaCache useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry:
```bibtex
@article{liu2024timestep,
  title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
  author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2411.19108},
  year={2024}
}
```
## Acknowledgements
We would like to thank the contributors to [LTX-Video](https://github.com/Lightricks/LTX-Video) and [Diffusers](https://github.com/huggingface/diffusers).