diff --git a/README.md b/README.md
index 23711880b29c..8c6ec7b83d10 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ Easy, fast, and cheap LLM serving for everyone
-| Documentation | Blog |
+| Documentation | Blog | Discussions |
@@ -18,7 +18,7 @@ Easy, fast, and cheap LLM serving for everyone
 *Latest News* 🔥
-- [2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid April. Check out our [blog post]().
+- [2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid April. Check out our [blog post](https://vllm.ai).
 ---
@@ -62,7 +62,7 @@ Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to get started
 ## Performance
 vLLM outperforms HuggingFace Transformers (HF) by up to 24x and Text Generation Inference (TGI) by up to 3.5x, in terms of throughput.
-For details, check out our [blog post]().
+For details, check out our [blog post](https://vllm.ai).
@@ -79,11 +79,11 @@ For details, check out our [blog post]().
Serving throughput when each request asks for 3 output completions.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 109e21625697..ab2e17a93b04 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -40,7 +40,7 @@ vLLM is flexible and easy to use with:
* Streaming outputs
* OpenAI-compatible API server
-For more information, please refer to our `blog post <>`_.
+For more information, please refer to our `blog post <https://vllm.ai>`_.