xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-16 11:35:50 +08:00
vllm/tests/entrypoints
Latest commit 896e41ae04 by Isotr0py: [CI/Build] Replace wikipedia url with local server ones (#28908)
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
2025-11-18 08:10:55 +00:00
Name | Last commit | Date
llm/ | Fix(llm): Abort orphaned requests when llm.chat() batch fails. Fixes #26081 (#27420) | 2025-11-02 16:24:01 +00:00
offline_mode/ | …
openai/ | [CI/Build] Replace wikipedia url with local server ones (#28908) | 2025-11-18 08:10:55 +00:00
pooling/ | [Frontend] Added chat-style multimodal support to /classify. (#27516) | 2025-11-14 11:03:55 +00:00
sagemaker/ | [Frontend] Add sagemaker_standards dynamic lora adapter and stateful session management decorators to vLLM OpenAI API server (#27892) | 2025-11-11 04:57:01 +00:00
__init__.py | …
conftest.py | …
test_api_server_process_manager.py | …
test_chat_utils.py | fix: allow HuggingFace standard chat template params via **kwargs (#27622) | 2025-10-28 21:12:34 +08:00
test_context.py | …
test_harmony_utils.py | [Frontend] [gpt-oss] Mcp type bug (#27689) | 2025-10-29 10:01:32 +00:00
test_renderer.py | [Frontend] Require flag for loading text and image embeds (#27204) | 2025-10-22 15:52:02 +00:00
test_responses_utils.py | [Frontend][responsesAPI][1/n] convert responses API tool input to chat completions tool format (#28231) | 2025-11-13 04:47:22 +00:00
test_ssl_cert_refresher.py | …