xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-01-10 13:14:33 +08:00)
Directory: vllm/vllm/entrypoints
Latest commit: Cyrus Leung · ba0d892074 · [Frontend] Use a proper chat template for VLM2Vec (#9912) · 2024-11-01 14:09:07 +00:00
openai          [Frontend] Chat-based Embeddings API (#9759)                            2024-11-01 08:13:35 +00:00
__init__.py     Change the name to vLLM (#150)                                          2023-06-17 03:07:40 -07:00
api_server.py   [Bugfix] Config got an unexpected keyword argument 'engine' (#8556)     2024-09-20 14:00:45 -07:00
chat_utils.py   [Frontend] Use a proper chat template for VLM2Vec (#9912)               2024-11-01 14:09:07 +00:00
launcher.py     [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157)  2024-09-18 13:56:58 +00:00
llm.py          [Misc] Remove deprecated arg for cuda graph capture (#9864)             2024-10-31 07:22:07 +00:00
logger.py       [Frontend] API support for beam search (#9087)                          2024-10-05 23:39:03 -07:00