diff --git a/docs/getting_started/installation/gpu.md b/docs/getting_started/installation/gpu.md
index f8a3acef784fc..1be7557b79e5f 100644
--- a/docs/getting_started/installation/gpu.md
+++ b/docs/getting_started/installation/gpu.md
@@ -42,7 +42,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G

 === "NVIDIA CUDA"

-    --8<-- "docs/getting_started/installation/gpu/cuda.inc.md:create-a-new-python-environment"
+    --8<-- "docs/getting_started/installation/gpu/cuda.inc.md:set-up-using-python"

 === "AMD ROCm"

diff --git a/docs/getting_started/installation/gpu/cuda.inc.md b/docs/getting_started/installation/gpu/cuda.inc.md
index 409efece30888..4503bb443188d 100644
--- a/docs/getting_started/installation/gpu/cuda.inc.md
+++ b/docs/getting_started/installation/gpu/cuda.inc.md
@@ -10,8 +10,6 @@ vLLM contains pre-compiled C++ and CUDA (12.8) binaries.
 # --8<-- [end:requirements]
 # --8<-- [start:set-up-using-python]

-### Create a new Python environment
-
 !!! note
     PyTorch installed via `conda` will statically link `NCCL` library, which can cause issues when vLLM tries to use `NCCL`. See for more details.