From 1015296b7947564d8a7258b6f12371f10c955214 Mon Sep 17 00:00:00 2001
From: Reid <61492567+reidliu41@users.noreply.github.com>
Date: Sat, 14 Jun 2025 00:25:08 +0800
Subject: [PATCH] [doc][mkdocs] fix the duplicate Supported features sections in GPU docs (#19606)

Signed-off-by: reidliu41
Co-authored-by: reidliu41
---
 docs/getting_started/installation/gpu/cuda.inc.md | 5 ++++-
 docs/getting_started/installation/gpu/rocm.inc.md | 5 ++++-
 docs/getting_started/installation/gpu/xpu.inc.md  | 5 ++++-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/docs/getting_started/installation/gpu/cuda.inc.md b/docs/getting_started/installation/gpu/cuda.inc.md
index 64dccef63d73..409efece3088 100644
--- a/docs/getting_started/installation/gpu/cuda.inc.md
+++ b/docs/getting_started/installation/gpu/cuda.inc.md
@@ -254,7 +254,10 @@ The latest code can contain bugs and may not be stable. Please use it with cauti
 
 See [deployment-docker-build-image-from-source][deployment-docker-build-image-from-source] for instructions on building the Docker image.
 
-## Supported features
+# --8<-- [end:build-image-from-source]
+# --8<-- [start:supported-features]
 
 See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
+
+# --8<-- [end:supported-features]
 # --8<-- [end:extra-information]
diff --git a/docs/getting_started/installation/gpu/rocm.inc.md b/docs/getting_started/installation/gpu/rocm.inc.md
index 8b7dc6dd09d3..8019fb50f4dd 100644
--- a/docs/getting_started/installation/gpu/rocm.inc.md
+++ b/docs/getting_started/installation/gpu/rocm.inc.md
@@ -217,7 +217,10 @@ docker run -it \
 
 Where the `<path/to/model>` is the location where the model is stored, for example, the weights for llama2 or llama3 models.
 
-## Supported features
+# --8<-- [end:build-image-from-source]
+# --8<-- [start:supported-features]
 
 See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
+
+# --8<-- [end:supported-features]
 # --8<-- [end:extra-information]
diff --git a/docs/getting_started/installation/gpu/xpu.inc.md b/docs/getting_started/installation/gpu/xpu.inc.md
index bee9a7ebb717..128fff164c3a 100644
--- a/docs/getting_started/installation/gpu/xpu.inc.md
+++ b/docs/getting_started/installation/gpu/xpu.inc.md
@@ -63,7 +63,8 @@ $ docker run -it \
              vllm-xpu-env
 ```
 
-## Supported features
+# --8<-- [end:build-image-from-source]
+# --8<-- [start:supported-features]
 
 XPU platform supports **tensor parallel** inference/serving and also supports **pipeline parallel** as a beta feature for online serving. We require Ray as the distributed runtime backend. For example, a reference execution like following:
 
@@ -78,4 +79,6 @@ python -m vllm.entrypoints.openai.api_server \
 ```
 
 By default, a ray instance will be launched automatically if no existing one is detected in the system, with `num-gpus` equals to `parallel_config.world_size`. We recommend properly starting a ray cluster before execution, referring to the helper script.
+
+# --8<-- [end:supported-features]
 # --8<-- [end:extra-information]
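
For context on why this change removes the duplicated section: the `# --8<-- [start:...]` / `# --8<-- [end:...]` lines are section markers for the `pymdownx.snippets` extension used by MkDocs. With these markers in place, a parent page can embed just the named section of an `.inc.md` file rather than rendering its own `## Supported features` heading on top of one coming from the include. A minimal sketch of the include side is below; the parent page path and surrounding layout are illustrative assumptions, not part of this patch:

```markdown
<!-- Hypothetical parent page, e.g. docs/getting_started/installation/gpu.md -->
<!-- Pulls in only the text between [start:supported-features] and
     [end:supported-features] from the included file: -->
--8<-- "docs/getting_started/installation/gpu/cuda.inc.md:supported-features"
```

Because the heading now lives in the parent page (or is omitted entirely) and the snippet contributes only the section body, the "Supported features" heading appears exactly once in the rendered GPU docs.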