Richard Zou e1279ef00f
[Docs] Update instructions for how to using existing torch binary (#24892)
Signed-off-by: Richard Zou <zou3519@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
2025-09-16 02:25:50 +00:00

Installation

vLLM supports the following hardware platforms:

- GPU
- CPU
- Google TPU

Hardware Plugins

The backends below live outside the main vllm repository and follow the Hardware-Pluggable RFC.

| Accelerator | PyPI / package | Repository |
|---|---|---|
| Ascend NPU | `vllm-ascend` | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | `vllm-rbln` | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | `vllm-spyre` | https://github.com/vllm-project/vllm-spyre |