Installation
vLLM supports the following hardware platforms:
Hardware Plugins
The backends below live outside the main vllm repository and follow the
Hardware-Pluggable RFC.
| Accelerator | PyPI / package | Repository |
|---|---|---|
| Google TPU | tpu-inference | https://github.com/vllm-project/tpu-inference |
| Ascend NPU | vllm-ascend | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | vllm-rbln | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | vllm-spyre | https://github.com/vllm-project/vllm-spyre |
| Cambricon MLU | vllm-mlu | https://github.com/Cambricon/vllm-mlu |
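
Installation steps differ per plugin, but as a rough sketch the two common patterns are shown below. The package names and repository URLs come from the table above; version pins, extras, and any driver or firmware setup are plugin-specific and not covered here, so always follow the README of the plugin you are installing.

```bash
# Pattern 1: plugin published on PyPI (example: Ascend NPU).
# The plugin package is installed alongside vLLM; exact version
# constraints vary per plugin and are omitted here.
pip install vllm vllm-ascend

# Pattern 2: plugin not on PyPI (example: Intel Gaudi),
# installed from its source repository instead.
git clone https://github.com/vllm-project/vllm-gaudi.git
cd vllm-gaudi
pip install -e .
```

Each plugin's repository also documents any additional environment requirements (drivers, firmware, or container images) needed before vLLM can run on that accelerator.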