Rob Mulla 70bfbd7b16
Docs update tpu install instructions (#27824)
2025-10-31 10:29:55 -07:00


Installation

vLLM supports the following hardware platforms:

Hardware Plugins

The backends below are maintained outside the main vLLM repository and follow the Hardware-Pluggable RFC; install each one from its linked package or repository.

| Accelerator | PyPI / package | Repository |
|---|---|---|
| Google TPU | `tpu-inference` | https://github.com/vllm-project/tpu-inference |
| Ascend NPU | `vllm-ascend` | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | `vllm-rbln` | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | `vllm-spyre` | https://github.com/vllm-project/vllm-spyre |
| Cambricon MLU | `vllm-mlu` | https://github.com/Cambricon/vllm-mlu |