xinyun/vllm (mirror of https://git.datalinker.icu/vllm-project/vllm.git)
vllm/vllm/entrypoints
Latest commit: 042af0c8d3 by Cyrus Leung, 2025-07-21 02:22:21 -07:00
[Model][1/N] Support multiple poolers at model level (#21227)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Name            Latest commit                                                               Date
cli/            Add full serve CLI reference back to docs (#20978)                          2025-07-15 17:42:30 +00:00
openai/         [Model][1/N] Support multiple poolers at model level (#21227)               2025-07-21 02:22:21 -07:00
__init__.py     Change the name to vLLM (#150)                                              2023-06-17 03:07:40 -07:00
api_server.py   [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (#18472)    2025-06-09 21:41:21 +00:00
chat_utils.py   [Frontend] OpenAI Responses API supports input image (#20975)               2025-07-15 18:59:36 -06:00
launcher.py     [Misc] Add SPDX-FileCopyrightText (#19100)                                  2025-06-03 11:20:17 -07:00
llm.py          [Core] Set pooling params based on task and model (#21128)                  2025-07-18 05:41:17 -07:00
logger.py       [Misc] Add SPDX-FileCopyrightText (#19100)                                  2025-06-03 11:20:17 -07:00
score_utils.py  [Docs] Lazy import gguf (#20785)                                            2025-07-10 16:06:37 -07:00
ssl.py          [Misc] Add SPDX-FileCopyrightText (#19100)                                  2025-06-03 11:20:17 -07:00
utils.py        [frontend] Add --help=page option for paginated help output (#20961)        2025-07-15 00:42:00 -07:00
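
The modules listed above are vLLM's user-facing entrypoints: llm.py defines the offline LLM class, while the openai/ and cli/ packages and api_server.py back the OpenAI-compatible HTTP server and the vllm serve command. For orientation only, below is a minimal sketch of the offline entrypoint; it assumes a working vLLM installation, and the model name is just an illustrative placeholder.

```python
# Minimal sketch of the offline entrypoint defined in llm.py.
# Assumes vLLM is installed; "facebook/opt-125m" is only an example model.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")            # builds the engine and loads the tokenizer
params = SamplingParams(temperature=0.8, max_tokens=32)

# generate() takes a list of prompts and returns one RequestOutput per prompt.
for output in llm.generate(["Hello, my name is"], params):
    print(output.prompt, output.outputs[0].text)
```

The same engine can instead be served over HTTP via the OpenAI-compatible server in openai/, typically launched through the serve command provided by the cli/ package.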