Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-17 09:26:25 +08:00)
vllm / vllm / entrypoints

History
Latest commit 47db6ec831 by zifeitong: [Frontend] Add per-request number of cached token stats (#10174), 2024-11-12 16:42:28 +00:00
Name           Last commit                                                             Date
openai/        [Frontend] Add per-request number of cached token stats (#10174)       2024-11-12 16:42:28 +00:00
__init__.py    Change the name to vLLM (#150)                                          2023-06-17 03:07:40 -07:00
api_server.py  bugfix: fix the bug that stream generate not work (#2756)              2024-11-09 10:09:48 +00:00
chat_utils.py  Online video support for VLMs (#10020)                                 2024-11-07 20:25:59 +00:00
launcher.py    [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) 2024-09-18 13:56:58 +00:00
llm.py         [V1] AsyncLLM Implementation (#9826)                                   2024-11-11 23:05:38 +00:00
logger.py      [Frontend] API support for beam search (#9087)                         2024-10-05 23:39:03 -07:00
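
For orientation, llm.py in this directory defines vLLM's offline LLM entrypoint. The sketch below shows typical usage of that class; the model name and sampling settings are illustrative, not taken from this listing.

    # Minimal offline-inference sketch using the LLM entrypoint (llm.py).
    # The model name and sampling parameters are illustrative choices.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # loads the model and engine
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # generate() takes a list of prompts and returns one RequestOutput per prompt
    outputs = llm.generate(["The capital of France is"], params)
    for out in outputs:
        print(out.outputs[0].text)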