Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-19 15:34:33 +08:00
vllm / vllm / entrypoints

Latest commit: 11fcf0e066 "Remove token-adding chat embedding params (#10551)" by Noam Gat (Signed-off-by: Noam Gat <noamgat@gmail.com>), 2024-11-21 23:59:47 -08:00
Name            Last commit                                                              Date
..
openai/         Remove token-adding chat embedding params (#10551)                       2024-11-21 23:59:47 -08:00
__init__.py     Change the name to vLLM (#150)                                           2023-06-17 03:07:40 -07:00
api_server.py   bugfix: fix the bug that stream generate not work (#2756)                2024-11-09 10:09:48 +00:00
chat_utils.py   [Frontend] Automatic detection of chat content format from AST (#9919)   2024-11-16 13:35:40 +08:00
launcher.py     [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157)   2024-09-18 13:56:58 +00:00
llm.py          [Minor] Fix line-too-long (#10563)                                       2024-11-21 19:40:40 -08:00
logger.py       [Frontend] API support for beam search (#9087)                           2024-10-05 23:39:03 -07:00