xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-09 07:24:54 +08:00
vllm / .buildkite

Latest commit: 04b5f9802d by Michael Goin, 2025-10-14 10:52:05 -07:00
[CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722)
Signed-off-by: mgoin <mgoin64@gmail.com>
| Name | Last commit | Date |
|------|-------------|------|
| lm-eval-harness | [Deprecation] Remove `prompt_token_ids` arg fallback in `LLM.generate` and `LLM.embed` (#18800) | 2025-08-22 10:56:57 +08:00 |
| nightly-benchmarks | [V0 Deprecation] Remove `VLLM_USE_V1` from docs and scripts (#26336) | 2025-10-07 16:46:44 +08:00 |
| scripts | [XPU] Upgrade NIXL to remove CUDA dependency (#26570) | 2025-10-11 05:15:23 +00:00 |
| check-wheel-size.py | [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722) | 2025-10-14 10:52:05 -07:00 |
| generate_index.py | [ci/build] Fix abi tag for aarch64 (#23329) | 2025-08-21 23:32:55 +08:00 |
| release-pipeline.yaml | [CI][Release][Arm64]: Build arm64 release for gpu arch 8.9 (#26698) | 2025-10-13 18:42:12 +00:00 |
| test-amd.yaml | [ci] Adding the test-amd.yaml for test definitions for the AMD backend (alternative PR) (#26718) | 2025-10-13 23:10:23 -07:00 |
| test-pipeline.yaml | AOT Compilation for torch.compile (Bundled) (#24274) | 2025-10-10 19:02:11 -04:00 |