# TeaCache4FLUX

[TeaCache](https://github.com/LiewFeng/TeaCache) can speed up [FLUX](https://github.com/black-forest-labs/flux) 2x without much visual quality degradation, in a training-free manner. The following image shows the results generated by TeaCache-FLUX with various `rel_l1_thresh` values: 0 (original), 0.25 (1.5x speedup), 0.4 (1.8x speedup), 0.6 (2.0x speedup), and 0.8 (2.25x speedup).

![visualization](../assets/TeaCache4FLUX.png)

## 📈 Inference Latency Comparisons on a Single A800

| FLUX.1 [dev] | TeaCache (0.25) | TeaCache (0.4) | TeaCache (0.6) | TeaCache (0.8) |
|:------------:|:---------------:|:--------------:|:--------------:|:--------------:|
|    ~18 s     |      ~12 s      |     ~10 s      |      ~9 s      |      ~8 s      |

## Installation

```shell
pip install --upgrade diffusers[torch] transformers protobuf tokenizers sentencepiece
```

## Usage

You can modify `rel_l1_thresh` in line 320 to obtain your desired trade-off between latency and visual quality; a minimal sketch of the underlying thresholding rule is given at the end of this README. For single-GPU inference, run:

```bash
python teacache_flux.py
```

## Citation

If you find TeaCache useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

```
@article{liu2024timestep,
  title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
  author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2411.19108},
  year={2024}
}
```

## Acknowledgements

We would like to thank the contributors to the [FLUX](https://github.com/black-forest-labs/flux) and [Diffusers](https://github.com/huggingface/diffusers) repositories.
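
## How `rel_l1_thresh` works (sketch)

For intuition, here is a minimal, hypothetical sketch of the relative-L1 thresholding idea behind `rel_l1_thresh`. The function and variable names below are illustrative assumptions, not the repository's actual code, and TeaCache's learned rescaling polynomial is omitted: the transformer is only re-run when the accumulated relative L1 change of its input since the last full computation exceeds the threshold; otherwise a cached residual is reused.

```python
import torch

def should_recompute(prev_inp, cur_inp, state, rel_l1_thresh):
    """Hypothetical TeaCache-style decision: accumulate the relative L1
    change of the transformer input across timesteps and only re-run the
    full transformer once the accumulated change exceeds rel_l1_thresh."""
    rel_l1 = ((cur_inp - prev_inp).abs().mean()
              / prev_inp.abs().mean()).item()
    state["accumulated"] += rel_l1
    if state["accumulated"] < rel_l1_thresh:
        return False  # small drift: reuse the cached residual from last step
    state["accumulated"] = 0.0  # large drift: recompute and reset the counter
    return True

# Illustrative use inside a denoising loop (names are assumptions):
# state = {"accumulated": 0.0}
# if should_recompute(prev_inp, cur_inp, state, rel_l1_thresh=0.4):
#     hidden = transformer(cur_inp)
#     cached_residual = hidden - cur_inp
# else:
#     hidden = cur_inp + cached_residual
```

Larger `rel_l1_thresh` values let more steps reuse the cache, which is why 0.8 gives the largest speedup in the table above, at some cost in visual quality.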