xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-03-16 17:07:11 +08:00)
vllm / docker
Latest commit: 27ed39a347 by liuzhenwei: [XPU] Upgrade NIXL to remove CUDA dependency (#26570)
Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>
2025-10-11 05:15:23 +00:00
File                    | Last commit                                                                                        | Date
Dockerfile              | [UX] Add FlashInfer as default CUDA dependency (#26443)                                            | 2025-10-09 14:10:02 -07:00
Dockerfile.cpu          | Remove Python 3.9 support ahead of PyTorch 2.9 in v0.11.1 (#26416)                                 | 2025-10-08 10:40:42 -07:00
Dockerfile.nightly_torch | Bump Flashinfer to v0.4.0 (#26326)                                                                | 2025-10-08 23:58:44 -07:00
Dockerfile.ppc64le      | [CI/Build] Fix ppc64le CPU build and tests (#22443)                                                | 2025-10-11 13:04:42 +08:00
Dockerfile.rocm         | [ROCm][CI/Build] Use ROCm7.0 as the base (#25178)                                                  | 2025-09-18 09:36:55 -07:00
Dockerfile.rocm_base    | [ROCm][Build] Add support for AMD Ryzen AI MAX / AI 300 Series (#25908)                            | 2025-10-01 21:39:49 +00:00
Dockerfile.s390x        | [CI/Build] Replace vllm.entrypoints.openai.api_server entrypoint with vllm serve command (#25967)  | 2025-10-02 10:04:57 -07:00
Dockerfile.tpu          | …                                                                                                  |
Dockerfile.xpu          | [XPU] Upgrade NIXL to remove CUDA dependency (#26570)                                              | 2025-10-11 05:15:23 +00:00