Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-09 11:54:54 +08:00)
# vllm / .buildkite

Latest commit: `a65a934ebe` by Zhewen Li — [CI/Build] Temporary fix to LM Eval Small Models (#28324), 2025-11-09 21:08:38 +00:00
Signed-off-by: zhewenli <zhewenli@meta.com>
| Name | Last commit | Date |
| --- | --- | --- |
| lm-eval-harness | Disable nm-testing models with issues in CI (#28206) | 2025-11-06 06:19:07 -08:00 |
| performance-benchmarks | [CI/Build][Intel] Enable performance benchmarks for Intel Gaudi 3 (#26919) | 2025-10-31 07:57:22 +08:00 |
| scripts | [Build] Fix release pipeline failing annotation (#28272) | 2025-11-07 10:06:45 -08:00 |
| check-wheel-size.py | [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722) | 2025-10-14 10:52:05 -07:00 |
| generate_index.py | [ci/build] Fix abi tag for aarch64 (#23329) | 2025-08-21 23:32:55 +08:00 |
| release-pipeline.yaml | Remove the tpu docker image nightly build. (#27997) | 2025-11-04 00:35:54 +00:00 |
| test-amd.yaml | [CI]: Add LMCacheConnector Unit Tests (#27852) | 2025-11-05 09:45:57 -08:00 |
| test-pipeline.yaml | [CI/Build] Temporary fix to LM Eval Small Models (#28324) | 2025-11-09 21:08:38 +00:00 |