Li, Jiang cfb7e55515
[Doc][CPU] Update CPU doc (#30765)
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-18 04:59:09 +00:00

Installation

vLLM supports the following hardware platforms:

Hardware Plugins

The backends below are maintained outside the main vllm repository and follow the Hardware-Pluggable RFC.

| Accelerator | PyPI / package | Repository |
|---|---|---|
| Google TPU | `tpu-inference` | https://github.com/vllm-project/tpu-inference |
| Ascend NPU | `vllm-ascend` | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | `vllm-rbln` | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | `vllm-spyre` | https://github.com/vllm-project/vllm-spyre |
| Cambricon MLU | `vllm-mlu` | https://github.com/Cambricon/vllm-mlu |
| Baidu Kunlun XPU | N/A, install from source | https://github.com/baidu/vLLM-Kunlun |
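As a rough sketch of how the table splits in practice: backends with a PyPI package install with `pip install <package>`, while the rest build from their repository. The `pick_package` helper below is purely hypothetical (vLLM ships no such command); it only encodes the accelerator-to-package mapping from the table, with made-up short names for the accelerators.

```shell
# pick_package: map a short accelerator name to its PyPI package from the
# table above, or print "source" when no package is published.
# Hypothetical helper for illustration only -- not part of vLLM.
pick_package() {
  case "$1" in
    tpu)   echo "tpu-inference" ;;   # Google TPU
    npu)   echo "vllm-ascend" ;;     # Ascend NPU
    rbln)  echo "vllm-rbln" ;;       # Rebellions ATOM / REBEL NPU
    spyre) echo "vllm-spyre" ;;      # IBM Spyre AIU
    mlu)   echo "vllm-mlu" ;;        # Cambricon MLU
    *)     echo "source" ;;          # Gaudi, MACA, Kunlun: build from source
  esac
}

# A PyPI-published backend installs directly, e.g.:
#   pip install "$(pick_package npu)"    # i.e. pip install vllm-ascend
# A "source" backend is cloned and installed from its repository instead, e.g.:
#   git clone https://github.com/vllm-project/vllm-gaudi && cd vllm-gaudi
#   pip install -e .   # exact build steps vary; see each plugin's README
pick_package npu
```

Check each plugin repository for the supported vLLM version before installing; the plugins release on their own schedules.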