TeaCache4CogVideoX1.5
TeaCache can speed up CogVideoX1.5 by 1.8x in a training-free manner, with little visual quality degradation. The following video shows results generated by TeaCache-CogVideoX1.5 with various `rel_l1_thresh` values: 0 (original), 0.1 (1.3x speedup), 0.2 (1.8x speedup), and 0.3 (2.1x speedup).
https://github.com/user-attachments/assets/c444b850-3252-4b37-ad4a-122d389218d9
📈 Inference Latency Comparisons on a Single H100 GPU
| CogVideoX1.5 (original) | TeaCache (rel_l1_thresh 0.1) | TeaCache (rel_l1_thresh 0.2) | TeaCache (rel_l1_thresh 0.3) |
|---|---|---|---|
| ~465 s | ~372 s | ~261 s | ~223 s |
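
The trade-off above comes from TeaCache's thresholded caching rule: at each denoising step the relative L1 change of the timestep-embedding-modulated input is accumulated, and the full transformer pass is skipped (reusing a cached output residual) as long as the accumulated change stays below `rel_l1_thresh`. The snippet below is a minimal, illustrative Python sketch of that rule, not the repo's actual code; names such as `TeaCacheState`, `rescale_fn`, and `full_forward` are placeholders, and the fitted rescaling polynomial is model-specific. See teacache_sample_video.py for the real implementation.

```python
# Illustrative sketch of TeaCache's caching rule (placeholder names, not the repo's API).
# `modulated_input` and `hidden_states` are expected to be tensors (e.g. torch.Tensor).


class TeaCacheState:
    def __init__(self, rel_l1_thresh, rescale_fn=None):
        self.rel_l1_thresh = rel_l1_thresh              # larger -> more skipped steps -> faster, lower fidelity
        self.rescale_fn = rescale_fn or (lambda x: x)   # stands in for the model-specific fitted polynomial
        self.accumulated = 0.0                          # accumulated (rescaled) relative L1 change
        self.prev_modulated = None                      # modulated input from the previous step
        self.cached_residual = None                     # cached change produced by the last full pass


def teacache_step(state, modulated_input, hidden_states, full_forward):
    """Return this step's transformer output, reusing the cached residual when inputs barely changed."""
    recompute = True
    if state.prev_modulated is not None and state.cached_residual is not None:
        # Relative L1 distance between the modulated inputs of consecutive steps.
        rel_l1 = ((modulated_input - state.prev_modulated).abs().mean()
                  / state.prev_modulated.abs().mean()).item()
        state.accumulated += state.rescale_fn(rel_l1)
        if state.accumulated < state.rel_l1_thresh:
            recompute = False                           # change is still small: skip the transformer
    state.prev_modulated = modulated_input

    if recompute:
        output = full_forward(hidden_states)            # full transformer forward pass
        state.cached_residual = output - hidden_states  # cache the residual for later reuse
        state.accumulated = 0.0                         # reset the accumulator after a real computation
    else:
        output = hidden_states + state.cached_residual  # approximate the output with the cached residual
    return output
```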
Usage
Follow CogVideoX to clone the repo and finish the installation. You can then modify `rel_l1_thresh` to obtain your desired trade-off between latency and visual quality, and change `ckpts_path` and `prompt` to customize your video.
For single-GPU inference, you can use the following command:
```bash
cd TeaCache4CogVideoX1.5

python3 teacache_sample_video.py \
    --rel_l1_thresh 0.2 \
    --ckpts_path THUDM/CogVideoX1.5-5B \
    --prompt "A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom. The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility." \
    --seed 42 \
    --num_inference_steps 50 \
    --output_path ./teacache_results
```
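
For orientation, the sampling script builds on the Diffusers CogVideoX pipeline (see Acknowledgements). The sketch below is an assumption of roughly what the uncached call underneath looks like, not the script itself: TeaCache is applied inside teacache_sample_video.py by modifying the transformer's forward pass, and details such as the 16 fps export are taken from common CogVideoX settings rather than from this repo.

```python
# Rough, uncached Diffusers sketch for orientation only; teacache_sample_video.py is the supported entry point.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B", torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(
    prompt="A clear, turquoise river flows through a rocky canyon.",  # shortened example prompt
    num_inference_steps=50,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=16)  # fps assumed from typical CogVideoX settings
```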
Citation
If you find TeaCache useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
```bibtex
@article{liu2024timestep,
  title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
  author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2411.19108},
  year={2024}
}
```
Acknowledgements
We would like to thank the contributors to CogVideoX and Diffusers.