ioana ghiban 1bb17ecb39
[CPU Backend] [Doc]: Update Installation Docs for CPUs (#29868)
Signed-off-by: Ioana Ghiban <ioana.ghiban@arm.com>
2025-12-03 13:33:50 +00:00

Installation

vLLM supports the following hardware platforms:

Hardware Plugins

The backends below are maintained outside the main vLLM repository and follow the Hardware-Pluggable RFC.

| Accelerator | PyPI / package | Repository |
|---|---|---|
| Google TPU | `tpu-inference` | https://github.com/vllm-project/tpu-inference |
| Ascend NPU | `vllm-ascend` | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | `vllm-rbln` | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | `vllm-spyre` | https://github.com/vllm-project/vllm-spyre |
| Cambricon MLU | `vllm-mlu` | https://github.com/Cambricon/vllm-mlu |
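As a rough sketch of the two installation paths implied by the table: backends with a PyPI package install with `pip`, while the "N/A, install from source" entries are cloned and installed from their repository. The exact prerequisites (vendor drivers, toolkits, supported Python versions) vary per backend, so treat the commands below as illustrative rather than a verified recipe; consult each plugin repository's own README for authoritative steps.

```shell
# Backends with a PyPI package (package names taken from the table above):
pip install vllm-ascend        # Ascend NPU

# Backends without a PyPI package are installed from source, e.g. Intel Gaudi:
git clone https://github.com/vllm-project/vllm-gaudi
cd vllm-gaudi
pip install -e .               # editable install from the cloned repository
```

Because these plugins follow the Hardware-Pluggable RFC, installing the package alongside vLLM is what registers the backend; no changes to the vLLM source tree are required.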