vllm / tests / entrypoints
Latest commit: e83b7e379c by Cyrus Leung (2025-12-07 00:00:22 -08:00): Revert "[Renderer] Separate out RendererConfig from ModelConfig (#30145)" (#30199)
| Name | Last commit | Last updated |
| --- | --- | --- |
| `llm` | Fix(llm): Abort orphaned requests when llm.chat() batch fails. Fixes #26081 (#27420) | 2025-11-02 16:24:01 +00:00 |
| `offline_mode` | [Frontend] Perform offline path replacement to tokenizer (#29706) | 2025-11-28 18:32:08 -08:00 |
| `openai` | Revert "[Renderer] Separate out RendererConfig from ModelConfig (#30145)" (#30199) | 2025-12-07 00:00:22 -08:00 |
| `pooling` | [Model][6/N] Improve all pooling task \| Support chunked prefill with ALL pooling (#27145) | 2025-12-04 13:44:15 +00:00 |
| `sagemaker` | [Misc] Update conftest for entrypoints/sagemaker test folder (#29799) | 2025-12-01 18:56:39 -09:00 |
| `__init__.py` | … | |
| `conftest.py` | [LoRA] Cleanup LoRA unused code (#29611) | 2025-11-28 22:52:58 -08:00 |
| `test_api_server_process_manager.py` | … | |
| `test_chat_utils.py` | Revert "[Renderer] Separate out RendererConfig from ModelConfig (#30145)" (#30199) | 2025-12-07 00:00:22 -08:00 |
| `test_context.py` | … | |
| `test_renderer.py` | … | |
| `test_responses_utils.py` | [responsesAPI][4] fix responseOutputItem Kimi K2 thinking bug (#29555) | 2025-12-02 02:11:35 +00:00 |
| `test_ssl_cert_refresher.py` | … | |