diff --git a/TeaCache4Lumina2/README.md b/TeaCache4Lumina2/README.md
new file mode 100644
index 0000000..9cd0e8e
--- /dev/null
+++ b/TeaCache4Lumina2/README.md
@@ -0,0 +1,49 @@
+
+# TeaCache4Lumina2
+
+[TeaCache](https://github.com/LiewFeng/TeaCache) can speed up [Lumina-Image-2.0](https://github.com/Alpha-VLLM/Lumina-Image-2.0) by up to roughly 1.25x without much visual quality degradation, in a training-free manner. The following image shows results generated by TeaCache-Lumina-Image-2.0 with various `rel_l1_thresh` values: 0 (original), 0.1 (1.05x speedup), 0.2 (1.15x speedup), 0.3 (1.25x speedup).
+
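+For context, the speedup comes from TeaCache's per-step caching decision: it accumulates the relative L1 change of the timestep-embedding-modulated input across denoising steps and, while the accumulated change stays below `rel_l1_thresh`, skips the transformer blocks and reuses the cached output residual. The snippet below is a minimal illustrative sketch of that rule, not the code in `teacache_lumina2.py` (the actual implementation adds details such as model-specific rescaling of the distance); the class and method names are placeholders.
+
+```python
+import torch
+
+
+class TeaCacheSketch:
+    """Illustrative sketch of TeaCache's skip/compute rule (not the actual implementation)."""
+
+    def __init__(self, rel_l1_thresh: float = 0.2):
+        self.rel_l1_thresh = rel_l1_thresh  # 0 -> never skip; larger -> more skipped steps
+        self.accumulated_change = 0.0       # running relative L1 distance across steps
+        self.prev_modulated_input = None    # timestep-modulated input from the previous step
+        self.cached_residual = None         # transformer output residual saved for reuse
+
+    def should_compute(self, modulated_input: torch.Tensor) -> bool:
+        """Decide whether the full transformer pass must run at this denoising step."""
+        if self.prev_modulated_input is None:
+            self.prev_modulated_input = modulated_input
+            return True  # first step: nothing to compare against, always compute
+
+        # Relative L1 change of the modulated input between consecutive steps.
+        rel_change = (
+            (modulated_input - self.prev_modulated_input).abs().mean()
+            / self.prev_modulated_input.abs().mean()
+        ).item()
+        self.prev_modulated_input = modulated_input
+        self.accumulated_change += rel_change
+
+        if self.accumulated_change < self.rel_l1_thresh:
+            return False  # change is still small: skip and reuse self.cached_residual
+        self.accumulated_change = 0.0
+        return True  # change grew too large: recompute and reset the accumulator
+
+    def store(self, residual: torch.Tensor) -> None:
+        """Cache the freshly computed residual so skipped steps can reuse it."""
+        self.cached_residual = residual
+```
+
+In this picture, `rel_l1_thresh = 0` never skips a step (the original model), while larger values trade a little fidelity for the latencies reported below.
+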
+## 📈 Inference Latency Comparisons on a 4070 Laptop GPU (1024 x 1536 resolution)
+
+
+| Lumina-Image-2.0 (baseline) | TeaCache (rel_l1_thresh = 0.1) | TeaCache (rel_l1_thresh = 0.2) | TeaCache (rel_l1_thresh = 0.3) |
+|:---------------------------:|:------------------------------:|:------------------------------:|:------------------------------:|
+| ~97.74s | ~93.19s | ~84.72s | ~78.43s |
+
+## Installation
+
+```shell
+pip install --upgrade diffusers[torch] transformers protobuf tokenizers sentencepiece
+pip install flash-attn --no-build-isolation
+```
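+
+After installing, an optional quick check confirms that the key dependencies import correctly and that PyTorch can see the GPU:
+
+```python
+import torch
+import diffusers
+import transformers
+
+print("diffusers:", diffusers.__version__)
+print("transformers:", transformers.__version__)
+print("CUDA available:", torch.cuda.is_available())
+```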
+
+## Usage
+
+You can modify `rel_l1_thresh` at line 113 of `teacache_lumina2.py` to obtain the desired trade-off between latency and visual quality. For single-GPU inference, use the following command:
+
+```bash
+python teacache_lumina2.py
+```
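+
+To measure the trade-off on your own hardware, you can simply time the script for each threshold you try. A minimal sketch (it assumes `teacache_lumina2.py` is in the current directory; the measurement includes model-loading time, not just denoising):
+
+```python
+import subprocess
+import time
+
+# Rough end-to-end wall-clock timing of a single run.
+start = time.perf_counter()
+subprocess.run(["python", "teacache_lumina2.py"], check=True)
+print(f"Wall-clock time: {time.perf_counter() - start:.2f} s")
+```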
+
+## Citation
+If you find TeaCache useful for your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
+
+```bibtex
+@article{liu2024timestep,
+ title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
+ author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
+ journal={arXiv preprint arXiv:2411.19108},
+ year={2024}
+}
+```
+
+## Acknowledgements
+
+We would like to thank the contributors to [Lumina-Image-2.0](https://github.com/Alpha-VLLM/Lumina-Image-2.0) and [Diffusers](https://github.com/huggingface/diffusers).