Installation
vLLM supports the following hardware platforms:
Hardware Plugins
The backends below live outside the main vLLM repository and follow the Hardware-Pluggable RFC.
| Accelerator | PyPI / package | Repository |
|---|---|---|
| Google TPU | tpu-inference | https://github.com/vllm-project/tpu-inference |
| Ascend NPU | vllm-ascend | https://github.com/vllm-project/vllm-ascend |
| Intel Gaudi (HPU) | N/A, install from source | https://github.com/vllm-project/vllm-gaudi |
| MetaX MACA GPU | N/A, install from source | https://github.com/MetaX-MACA/vLLM-metax |
| Rebellions ATOM / REBEL NPU | vllm-rbln | https://github.com/rebellions-sw/vllm-rbln |
| IBM Spyre AIU | vllm-spyre | https://github.com/vllm-project/vllm-spyre |
| Cambricon MLU | vllm-mlu | https://github.com/Cambricon/vllm-mlu |