From cca91a7a10c8227984311444e4ba901e7c54cf34 Mon Sep 17 00:00:00 2001
From: Reid <61492567+reidliu41@users.noreply.github.com>
Date: Wed, 18 Jun 2025 18:30:58 +0800
Subject: [PATCH] [doc] fix the incorrect label (#19787)

Signed-off-by: reidliu41
Co-authored-by: reidliu41
---
 docs/getting_started/installation/gpu.md          | 2 +-
 docs/getting_started/installation/gpu/cuda.inc.md | 2 --
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/docs/getting_started/installation/gpu.md b/docs/getting_started/installation/gpu.md
index f8a3acef784fc..1be7557b79e5f 100644
--- a/docs/getting_started/installation/gpu.md
+++ b/docs/getting_started/installation/gpu.md
@@ -42,7 +42,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
 
 === "NVIDIA CUDA"
 
-    --8<-- "docs/getting_started/installation/gpu/cuda.inc.md:create-a-new-python-environment"
+    --8<-- "docs/getting_started/installation/gpu/cuda.inc.md:set-up-using-python"
 
 === "AMD ROCm"
 
diff --git a/docs/getting_started/installation/gpu/cuda.inc.md b/docs/getting_started/installation/gpu/cuda.inc.md
index 409efece30888..4503bb443188d 100644
--- a/docs/getting_started/installation/gpu/cuda.inc.md
+++ b/docs/getting_started/installation/gpu/cuda.inc.md
@@ -10,8 +10,6 @@ vLLM contains pre-compiled C++ and CUDA (12.8) binaries.
 
 # --8<-- [end:requirements]
 # --8<-- [start:set-up-using-python]
 
-### Create a new Python environment
-
 !!! note
     PyTorch installed via `conda` will statically link `NCCL` library, which can cause issues when vLLM tries to use `NCCL`. See for more details.
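For context, a `--8<--` include label is expected to name a snippet section delimited by matching `[start:...]`/`[end:...]` markers in the included file; `cuda.inc.md` defines a `set-up-using-python` section, not `create-a-new-python-environment`, which is why the label in `gpu.md` is corrected and the stray heading removed. Below is a minimal sketch of how the two sides pair up. The file paths and the `set-up-using-python` section name are taken from the patch; the note text inside the section is illustrative only.

```markdown
<!-- In docs/getting_started/installation/gpu/cuda.inc.md:
     everything between the start/end markers forms the named snippet. -->
# --8<-- [start:set-up-using-python]
!!! note
    Illustrative placeholder content for the "set-up-using-python" snippet.
# --8<-- [end:set-up-using-python]

<!-- In docs/getting_started/installation/gpu.md:
     the include label after the colon must match a defined section name. -->
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu/cuda.inc.md:set-up-using-python"
```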