xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-29 08:00:54 +08:00
vllm / vllm / entrypoints
Latest commit: fc66dee76d by Robin — [Misc] Fix the error in the tip for the --lora-modules parameter (#12319), 2025-01-22 16:48:41 +00:00
Signed-off-by: wangerxiao <863579016@qq.com>
Name             Latest commit                                                             Date
openai/          [Misc] Fix the error in the tip for the --lora-modules parameter (#12319) 2025-01-22 16:48:41 +00:00
__init__.py      Change the name to vLLM (#150)                                            2023-06-17 03:07:40 -07:00
api_server.py    [2/N] API Server: Avoid ulimit footgun (#11530)                           2024-12-26 23:43:05 +00:00
chat_utils.py    [Model] Initialize support for Deepseek-VL2 models (#11578)               2025-01-12 00:17:24 -08:00
launcher.py      [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157)    2024-09-18 13:56:58 +00:00
llm.py           [Core] Support fully transparent sleep mode (#11743)                      2025-01-22 14:39:32 +08:00
logger.py        [Frontend] API support for beam search (#9087)                            2024-10-05 23:39:03 -07:00
utils.py         [Bugfix] Fix request cancellation without polling (#11190)                2024-12-17 12:26:32 -08:00