From 0080d8329d272f06627286984be8038b0ca9d590 Mon Sep 17 00:00:00 2001
From: Zhuohan Li
Date: Wed, 30 Aug 2023 02:17:27 -0700
Subject: [PATCH] Add acknowledgement to a16z grant

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 6dc6e3d8e0ee4..df23461b39901 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ Easy, fast, and cheap LLM serving for everyone
 ---
 
 *Latest News* 🔥
+- [2023/08] We would like to express our sincere gratitude to [Andreessen Horowitz](https://a16z.com/) (a16z) for providing a generous grant to support the open-source development and research of vLLM.
 - [2023/07] Added support for LLaMA-2! You can run and serve 7B/13B/70B LLaMA-2s on vLLM with a single command!
 - [2023/06] Serving vLLM On any Cloud with SkyPilot. Check out a 1-click [example](https://github.com/skypilot-org/skypilot/blob/master/llm/vllm) to start the vLLM demo, and the [blog post](https://blog.skypilot.co/serving-llm-24x-faster-on-the-cloud-with-vllm-and-skypilot/) for the story behind vLLM development on the clouds.
 - [2023/06] We officially released vLLM! FastChat-vLLM integration has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid-April. Check out our [blog post](https://vllm.ai).