vllm/.buildkite
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (last synced 2025-12-09 00:15:24 +08:00)
Latest commit: 0d8a7d8a26 [Compressed Tensors] Add XPU wNa16 support (#29484)
Author: Yi Liu, Signed-off-by: yiliu30 <yi4.liu@intel.com>, 2025-12-05 22:02:09 +08:00
Name                      Last change                  Last commit
lm-eval-harness/          2025-12-04 16:07:20 -08:00   [CI/Build][AMD] Add Llama4 Maverick FP8 to AMD CI (#28695)
performance-benchmarks/   2025-12-02 09:00:23 +00:00   [vLLM Benchmark Suite] Add default parameters section and update CPU benchmark cases (#29381)
scripts/                  2025-12-05 22:02:09 +08:00   [Compressed Tensors] Add XPU wNa16 support (#29484)
check-wheel-size.py       2025-10-14 10:52:05 -07:00   [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722)
release-pipeline.yaml     2025-12-01 22:17:10 -08:00   [CI] Renovation of nightly wheel build & generation (take 2) (#29838)
test-amd.yaml             2025-12-04 16:07:20 -08:00   [CI/Build][AMD] Add Llama4 Maverick FP8 to AMD CI (#28695)
test-pipeline.yaml        2025-12-05 00:42:37 +00:00   [CI/Build] Update batch invariant test trigger (#30080)