xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-25 14:34:02 +08:00)
Directory: vllm/tests/v1/engine
Latest commit: 45c0526ac9 by Nick Hill (2025-12-19 01:29:11 +00:00)
[BugFix] Handle errors when preprocessing added requests (#30895)
Signed-off-by: Nick Hill <nhill@redhat.com>
File                                Latest commit                                                                                 Date
__init__.py                         …
conftest.py                         …
test_abort_final_step.py            [BugFix] Eagerly abort cancelled final-step requests (#29987)                                 2025-12-05 17:28:32 +00:00
test_async_llm.py                   [Bugfix] fix DP-aware routing in OpenAI API requests (#29002)                                 2025-12-18 09:50:42 -08:00
test_engine_args.py                 [Core] Add xxHash as a high-performance hash option for accelerating prefix caching (#29163)  2025-12-03 16:06:57 +00:00
test_engine_core_client.py          …
test_engine_core.py                 kv_transfer: Rename the shared storage connectors (#30201)                                    2025-12-08 20:46:09 -08:00
test_fast_incdec_prefix_err.py      …
test_init_error_messaging.py        [Bugfix] fix confusing OOM errors during v1 init (#28051)                                     2025-12-10 23:17:41 +00:00
test_llm_engine.py                  …
test_output_processor.py            [Misc] Refactor tokenizer interface (#29693)                                                  2025-11-29 04:02:21 -08:00
test_parallel_sampling.py           …
test_preprocess_error_handling.py   [BugFix] Handle errors when preprocessing added requests (#30895)                             2025-12-19 01:29:11 +00:00
test_process_multi_modal_uuids.py   Revert "[Renderer] Separate out RendererConfig from ModelConfig (#30145)" (#30199)            2025-12-07 00:00:22 -08:00
utils.py                            …