vllm / tests / entrypoints / llm
History
Vensen · 0ce743f4e1 · 2025-11-02 16:24:01 +00:00
Fix(llm): Abort orphaned requests when llm.chat() batch fails
Fixes #26081 (#27420)
…
Signed-off-by: vensenmu <vensenmu@gmail.com>
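For context on the commit above: it concerns the case where a batched llm.chat() call fails partway through and requests already submitted to the engine are left orphaned. The Python sketch below illustrates only the caller-side scenario; the model choice, the deliberately problematic conversation, and the broad exception handling are assumptions made for illustration and are not taken from the commit or from test_chat.py.

from vllm import LLM, SamplingParams

# Illustrative sketch of the failure scenario the commit addresses.
# Assumptions (not from the commit): the model choice, that an empty
# message makes the batch fail, and the broad exception handling.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=8)

conversations = [
    [{"role": "user", "content": "Say hello."}],
    [{"role": "user", "content": ""}],   # assumed to make the batch fail
    [{"role": "user", "content": "Say goodbye."}],
]

try:
    outputs = llm.chat(conversations, sampling_params=params)
except Exception:
    # Before the fix, a failure here could leave already-queued requests
    # orphaned in the engine; the commit aborts them so subsequent calls
    # on the same LLM instance start from a clean state.
    outputs = []

print(f"completed {len(outputs)} of {len(conversations)} conversations")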
__init__.py                  …
test_accuracy.py             …
test_chat.py                 Fix(llm): Abort orphaned requests when llm.chat() batch fails, Fixes #26081 (#27420)          2025-11-02 16:24:01 +00:00
test_collective_rpc.py       [CI] Replace large models with tiny alternatives in tests (#24057)                            2025-10-16 15:51:27 +01:00
test_generate.py             [Bugfix][Frontend] validate arg priority in frontend LLM class before add request (#27596)    2025-10-28 14:02:43 +00:00
test_gpu_utilization.py      …
test_mm_cache_stats.py       …
test_prompt_validation.py    [Frontend] Require flag for loading text and image embeds (#27204)                            2025-10-22 15:52:02 +00:00