xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-30 06:11:49 +08:00)
Directory: vllm / vllm / entrypoints
Latest commit: f790ad3c50 (Avinash Raj) [Frontend][OpenAI] Support for returning max_model_len on /v1/models response (#4643), 2024-06-02 08:06:13 +00:00
..
openai          [Frontend][OpenAI] Support for returning max_model_len on /v1/models response (#4643)   2024-06-02 08:06:13 +00:00
__init__.py     Change the name to vLLM (#150)                                                           2023-06-17 03:07:40 -07:00
api_server.py   [Frontend] Add --log-level option to api server (#4377)                                  2024-04-26 05:36:01 +00:00
llm.py          [BugFix] Prevent LLM.encode for non-generation Models (#5184)                            2024-06-01 23:35:41 +00:00
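As a side note on the `openai` entry above: per commit #4643, the OpenAI-compatible server's `/v1/models` response includes a `max_model_len` field for each model. A minimal sketch of reading that field from a response payload, assuming the OpenAI-style model-list shape vLLM follows; the model id and the 4096 value here are made-up illustration data, not output from a real server:

```python
import json

# Hypothetical /v1/models response body; only max_model_len is the field
# commit #4643 adds, the rest follows the OpenAI model-list format.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "example-model", "object": "model", "max_model_len": 4096}
  ]
}
""")

# Collect each model's context-length limit, if present.
limits = {m["id"]: m.get("max_model_len") for m in sample_response["data"]}
print(limits)
```

In practice the payload would come from an HTTP GET against a running vLLM server's `/v1/models` endpoint rather than a literal string.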