From c1d1875ba347d0f7d80d1d80794ba85c5ba01d79 Mon Sep 17 00:00:00 2001
From: Michael Goin
Date: Tue, 7 Jan 2025 17:29:07 -0500
Subject: [PATCH] Updates docs with correction about default cuda version

Correct 12.1 --> 12.4
---
 docs/source/getting_started/installation/gpu-cuda.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 1cd513177bf0d..c985c6dd47cc9 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -2,7 +2,7 @@