diff --git a/docs/source/getting_started/installation/gpu-cuda.md b/docs/source/getting_started/installation/gpu-cuda.md
index 1cd513177bf0d..c985c6dd47cc9 100644
--- a/docs/source/getting_started/installation/gpu-cuda.md
+++ b/docs/source/getting_started/installation/gpu-cuda.md
@@ -2,7 +2,7 @@
 
 # Installation for CUDA
 
-vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.
+vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.4) binaries.
 
 ## Requirements
 
@@ -43,12 +43,12 @@ Therefore, it is recommended to install vLLM with a **fresh new** environment. I
 You can install vLLM using either `pip` or `uv pip`:
 
 ```console
-$ # Install vLLM with CUDA 12.1.
+$ # Install vLLM with CUDA 12.4.
 $ pip install vllm # If you are using pip.
 $ uv pip install vllm # If you are using uv.
 ```
 
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.4 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
 
 ```console
 $ # Install vLLM with CUDA 11.8.