xinyun / vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-04-29 11:47:09 +08:00)
vllm/.buildkite
Latest commit: 83fd49b1fc by Zhewen Li
[CI/Build][Bugfix]Fix Quantized Models Test on AMD (#27712)
Signed-off-by: zhewenli <zhewenli@meta.com>
2025-10-29 06:27:30 +00:00
lm-eval-harness          [CI/Build] Update Llama4 eval yaml (#27070)                                          2025-10-17 04:59:47 +00:00
nightly-benchmarks       add SLA information into comparison graph for vLLM Benchmark Suite (#25525)          2025-10-23 08:04:59 +00:00
scripts                  Update release pipeline for PyTorch 2.9.0 (#27303)                                   2025-10-22 09:18:01 +00:00
check-wheel-size.py      [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722)   2025-10-14 10:52:05 -07:00
generate_index.py        [ci/build] Fix abi tag for aarch64 (#23329)                                          2025-08-21 23:32:55 +08:00
release-pipeline.yaml    Fix AArch64 CPU Docker pipeline (#27331)                                             2025-10-24 05:11:01 -07:00
test-amd.yaml            [CI/Build][Bugfix]Fix Quantized Models Test on AMD (#27712)                          2025-10-29 06:27:30 +00:00
test-pipeline.yaml       [Bugfix][CI] Fix v1 attention backend tests and add CI coverage (#26597)             2025-10-28 11:42:05 -04:00