Mirror of https://git.datalinker.icu/vllm-project/vllm.git (last synced 2025-12-09 05:34:55 +08:00)
vllm/.buildkite/scripts/hardware_ci
Latest commit: 0d8a7d8a26 by Yi Liu, 2025-12-05 22:02:09 +08:00
[Compressed Tensors] Add XPU wNa16 support (#29484)
Signed-off-by: yiliu30 <yi4.liu@intel.com>
File | Last commit | Date
run-amd-test.sh | [ci][amd] fix basic models extra init test (#28676) | 2025-11-14 02:44:36 +00:00
run-cpu-test-arm.sh | [cpu][fix] Fix Arm CI tests (#29552) | 2025-11-27 13:09:41 +08:00
run-cpu-test-ppc64le.sh | Update Dockerfile to use gcc-toolset-14 and fix test case failures on power (ppc64le) (#28957) | 2025-11-21 12:24:09 +00:00
run-cpu-test-s390x.sh | …
run-cpu-test.sh | [CPU] Update torch 2.9.1 for CPU backend (#29664) | 2025-11-28 13:37:54 +00:00
run-gh200-test.sh | [CI/Build] get rid of unused VLLM_FA_CMAKE_GPU_ARCHES (#21599) | 2025-07-31 15:00:08 +08:00
run-hpu-test.sh | …
run-npu-test.sh | [Misc] Add docker build env for Ascend NPU (#30015) | 2025-12-03 19:53:00 -08:00
run-tpu-v1-test-part2.sh | [V0 Deprecation] Remove VLLM_USE_V1 from docs and scripts (#26336) | 2025-10-07 16:46:44 +08:00
run-tpu-v1-test.sh | [V0 Deprecation] Remove VLLM_USE_V1 from docs and scripts (#26336) | 2025-10-07 16:46:44 +08:00
run-xpu-test.sh | [Compressed Tensors] Add XPU wNa16 support (#29484) | 2025-12-05 22:02:09 +08:00