Mirror of https://git.datalinker.icu/vllm-project/vllm.git (last synced 2025-12-09 00:15:24 +08:00)
vllm / .buildkite: commit history
Latest commit: 0d8a7d8a26 [Compressed Tensors] Add XPU wNa16 support (#29484), Yi Liu, 2025-12-05 22:02:09 +08:00 (Signed-off-by: yiliu30 <yi4.liu@intel.com>)
lm-eval-harness           [CI/Build][AMD] Add Llama4 Maverick FP8 to AMD CI (#28695)                                     2025-12-04 16:07:20 -08:00
performance-benchmarks    [vLLM Benchmark Suite] Add default parameters section and update CPU benchmark cases (#29381)  2025-12-02 09:00:23 +00:00
scripts                   [Compressed Tensors] Add XPU wNa16 support (#29484)                                            2025-12-05 22:02:09 +08:00
check-wheel-size.py       [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722)             2025-10-14 10:52:05 -07:00
release-pipeline.yaml     [CI] Renovation of nightly wheel build & generation (take 2) (#29838)                          2025-12-01 22:17:10 -08:00
test-amd.yaml             [CI/Build][AMD] Add Llama4 Maverick FP8 to AMD CI (#28695)                                     2025-12-04 16:07:20 -08:00
test-pipeline.yaml        [CI/Build] Update batch invariant test trigger (#30080)                                        2025-12-05 00:42:37 +00:00
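
The check-wheel-size.py entry above gates CI on built-wheel size via the VLLM_MAX_SIZE_MB limit named in its commit message (#26722). The sketch below is a rough illustration only, not the repository's actual script: it assumes the gate simply compares each wheel in a directory against that limit, with the default of 500 MB taken from the commit title and everything else (function name, argument handling, output format) invented for the example.

    # Illustrative sketch of a wheel-size gate; not vllm's check-wheel-size.py.
    # VLLM_MAX_SIZE_MB and the 500 MB default come from commit #26722's title;
    # all other details are assumptions.
    import os
    import sys
    from pathlib import Path

    def check_wheel_size(wheel_dir: str) -> int:
        """Return 0 if every .whl in wheel_dir is under the size limit, else 1."""
        max_mb = int(os.environ.get("VLLM_MAX_SIZE_MB", "500"))
        for wheel in Path(wheel_dir).glob("*.whl"):
            size_mb = wheel.stat().st_size / (1024 * 1024)
            if size_mb > max_mb:
                print(f"FAIL: {wheel.name} is {size_mb:.1f} MB (limit {max_mb} MB)")
                return 1
            print(f"OK: {wheel.name} is {size_mb:.1f} MB (limit {max_mb} MB)")
        return 0

    if __name__ == "__main__":
        sys.exit(check_wheel_size(sys.argv[1] if len(sys.argv) > 1 else "dist"))

A gate like this would be invoked in CI as, for example, "python check_wheel_size.py dist" after the wheel build step; the "dist" default is likewise an assumption for the sketch.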