vllm/docs/getting_started/installation
History: latest commit 93aee29fdb by David Xia, [doc] split "Other AI Accelerators" tabs (#19708), 2025-06-17 22:05:29 +09:00
| File | Last commit | Date |
| --- | --- | --- |
| cpu/ | [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (#19019) | 2025-06-03 00:16:17 -07:00 |
| gpu/ | [doc][mkdocs] fix the duplicate Supported features sections in GPU docs (#19606) | 2025-06-13 16:25:08 +00:00 |
| .nav.yml | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| aws_neuron.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| cpu.md | Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (#17930) | 2025-06-10 06:22:05 +00:00 |
| device.template.md | Migrate docs from Sphinx to MkDocs (#18145) | 2025-05-23 02:09:53 -07:00 |
| google_tpu.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| gpu.md | [doc] clarify windows support (#19088) | 2025-06-03 21:42:17 +08:00 |
| intel_gaudi.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |
| python_env_setup.inc.md | Migrate docs from Sphinx to MkDocs (#18145) | 2025-05-23 02:09:53 -07:00 |
| README.md | [doc] split "Other AI Accelerators" tabs (#19708) | 2025-06-17 22:05:29 +09:00 |

README.md

---
title: Installation
---

{ #installation-index }
vLLM supports the following hardware platforms:

- GPU
    - NVIDIA CUDA
    - AMD ROCm
    - Intel XPU
- CPU
    - Intel/AMD x86
    - ARM AArch64
    - Apple silicon
    - IBM Z (S390X)
- Google TPU
- Intel Gaudi
- AWS Neuron
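
Whichever platform you install on, a quick way to confirm the installation works end to end is a short offline-inference run. The following is a minimal sketch, assuming vLLM is already installed (for example via `pip install vllm` on an NVIDIA CUDA machine) and using `facebook/opt-125m` purely as a small illustrative model:

```python
# Minimal smoke test for a vLLM installation (illustrative sketch).
from vllm import LLM, SamplingParams

# A small model is used here only so the check downloads and runs quickly;
# substitute any model supported on your platform.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

If this prints a completion, the engine and the kernels for your hardware are working; platform-specific build and install instructions live in the per-device pages listed in the table above.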