diff --git a/docs/getting_started/installation/README.md b/docs/getting_started/installation/README.md
index a252343dcee8..f6ecceb85d86 100644
--- a/docs/getting_started/installation/README.md
+++ b/docs/getting_started/installation/README.md
@@ -14,3 +14,16 @@ vLLM supports the following hardware platforms:
 - [Google TPU](google_tpu.md)
 - [Intel Gaudi](intel_gaudi.md)
 - [AWS Neuron](aws_neuron.md)
+
+## Hardware Plugins
+
+The backends below live **outside** the main `vllm` repository and follow the
+[Hardware-Pluggable RFC](../design/plugin_system.md).
+
+| Accelerator | PyPI / package | Repository |
+|-------------|----------------|------------|
+| Ascend NPU | `vllm-ascend` | |
+| Intel Gaudi (HPU) | N/A, install from source | |
+| MetaX MACA GPU | N/A, install from source | |
+| Rebellions ATOM / REBEL NPU | `vllm-rbln` | |
+| IBM Spyre AIU | `vllm-spyre` | |