xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-05-01 01:23:34 +08:00
vllm / .buildkite
Latest commit: 51c599f0ec by Harry Mellor, "Skip models that cannot currently init on Transformers v5 (#28471)", 2025-11-12 23:43:57 +00:00
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
| Name | Last commit | Last modified |
|---|---|---|
| .. | | |
| lm-eval-harness | Disable nm-testing models with issues in CI (#28206) | 2025-11-06 06:19:07 -08:00 |
| performance-benchmarks | [CI/Build][Intel] Enable performance benchmarks for Intel Gaudi 3 (#26919) | 2025-10-31 07:57:22 +08:00 |
| scripts | VLLM_USE_TRITON_FLASH_ATTN V0 variable deprecation (#27611) | 2025-11-11 18:34:36 -08:00 |
| check-wheel-size.py | [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722) | 2025-10-14 10:52:05 -07:00 |
| generate_index.py | … | |
| release-pipeline.yaml | [CPU] Refactor CPU attention backend (#27954) | 2025-11-12 09:43:06 +08:00 |
| test-amd.yaml | [BugFix] Add test_outputs.py to CI pipeline (#28466) | 2025-11-11 16:01:30 +00:00 |
| test-pipeline.yaml | Skip models that cannot currently init on Transformers v5 (#28471) | 2025-11-12 23:43:57 +00:00 |