xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-11 20:44:38 +08:00
vllm / tests / v1 / engine
History
Latest commit: ad0297d113 by Nick Hill
[Misc] Support passing multiple request ids at once to AsyncLLM.abort() (#22944)
Signed-off-by: Nick Hill <nhill@redhat.com>
2025-08-15 17:00:36 -07:00
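For context on the commit above, a minimal sketch of what #22944 describes: aborting several in-flight requests with a single AsyncLLM.abort() call. This assumes vLLM's v1 AsyncLLM with its from_engine_args constructor and an async abort() that accepts a list of ids; the model name and request ids are illustrative.

    import asyncio

    from vllm.engine.arg_utils import AsyncEngineArgs
    from vllm.v1.engine.async_llm import AsyncLLM


    async def main() -> None:
        # Illustrative engine setup; any small model works for this sketch.
        engine = AsyncLLM.from_engine_args(AsyncEngineArgs(model="facebook/opt-125m"))
        try:
            # Per the commit message, one call can now take a batch of request ids
            # instead of requiring a separate abort() call per id.
            await engine.abort(["req-0", "req-1", "req-2"])
        finally:
            engine.shutdown()


    if __name__ == "__main__":
        asyncio.run(main())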
File                            Last commit                                                                           Date
..
__init__.py                     …
conftest.py                     …
test_async_llm.py               [Misc] Support passing multiple request ids at once to AsyncLLM.abort() (#22944)     2025-08-15 17:00:36 -07:00
test_engine_args.py             …
test_engine_core_client.py      [BugFix] Handle case where async utility call is cancelled (#22996)                   2025-08-15 17:38:42 -06:00
test_engine_core.py             [Core] Use individual MM items in P0/P1 cache and model runner (#22570)               2025-08-13 07:18:07 -07:00
test_fast_incdec_prefix_err.py  …
test_llm_engine.py              [Bugfix] fix when skip tokenizer init (#21922)                                        2025-08-01 10:09:36 -07:00
test_output_processor.py        [Core] Use individual MM items in P0/P1 cache and model runner (#22570)               2025-08-13 07:18:07 -07:00
utils.py                        …