Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-03-16 13:07:16 +08:00)
vllm / .buildkite
Latest commit: 53f6e81dfd by Zhewen Li, [CI/Build] Fix OpenAI API correctness on AMD CI (#28022), 2025-11-04 07:20:50 +00:00 (Signed-off-by: zhewenli <zhewenli@meta.com>)
| Name | Last commit | Date |
| --- | --- | --- |
| lm-eval-harness | [CI/Build] Add eval config for Qwen3-235B-A22B-Instruct-2507-FP8 (#27113) | 2025-10-30 07:50:56 +00:00 |
| performance-benchmarks | [CI/Build][Intel] Enable performance benchmarks for Intel Gaudi 3 (#26919) | 2025-10-31 07:57:22 +08:00 |
| scripts | [CI Test] Add Scheduled Integration Test (#27765) | 2025-10-30 17:29:26 -07:00 |
| check-wheel-size.py | [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722) | 2025-10-14 10:52:05 -07:00 |
| generate_index.py | [ci/build] Fix abi tag for aarch64 (#23329) | 2025-08-21 23:32:55 +08:00 |
| release-pipeline.yaml | Remove the tpu docker image nightly build. (#27997) | 2025-11-04 00:35:54 +00:00 |
| test-amd.yaml | [CI/Build] Fix OpenAI API correctness on AMD CI (#28022) | 2025-11-04 07:20:50 +00:00 |
| test-pipeline.yaml | Add TP parameter to attention tests (#27683) | 2025-11-03 13:04:40 -08:00 |