Mirror of https://git.datalinker.icu/vllm-project/vllm.git, last synced 2025-12-09 03:54:57 +08:00
vllm / .buildkite
Latest commit: f8a0acbdbe by Michael Goin, [CI] Enable Blackwell Llama4 MoE tests (#26731), Signed-off-by: mgoin <mgoin64@gmail.com>, 2025-10-15 21:02:57 -06:00
Name                   Last commit                                                                          Date
lm-eval-harness        [CI/Build] Add Qwen2.5-VL-7B-Instruct ChartQA Accuracy Tests in CI (#21810)          2025-10-15 08:09:56 +00:00
nightly-benchmarks     [V0 Deprecation] Remove VLLM_USE_V1 from docs and scripts (#26336)                   2025-10-07 16:46:44 +08:00
scripts                [XPU] Upgrade NIXL to remove CUDA dependency (#26570)                                2025-10-11 05:15:23 +00:00
check-wheel-size.py    [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722)   2025-10-14 10:52:05 -07:00
generate_index.py      [ci/build] Fix abi tag for aarch64 (#23329)                                          2025-08-21 23:32:55 +08:00
release-pipeline.yaml  [CI][Release][Arm64]: Build arm64 release for gpu arch 8.9 (#26698)                  2025-10-13 18:42:12 +00:00
test-amd.yaml          [ci] Adjusting AMD test composition 2025-10-14 (#26852)                              2025-10-15 23:52:13 +00:00
test-pipeline.yaml     [CI] Enable Blackwell Llama4 MoE tests (#26731)                                      2025-10-15 21:02:57 -06:00