Installation
vLLM supports the following hardware platforms:
Hardware Plugins
The backends below live outside the main vllm repository and follow the
Hardware-Pluggable RFC.
| Accelerator | PyPI / package | Repository |
|---|---|---|
| Google TPU | tpu-inference | https://github.com/vllm-project/tpu-inference |
| Ascend NPU | vllm-ascend | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | vllm-rbln | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | vllm-spyre | https://github.com/vllm-project/vllm-spyre |
| Cambricon MLU | vllm-mlu | https://github.com/Cambricon/vllm-mlu |