xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-24 00:15:01 +08:00

vllm/vllm/engine
Latest commit: ac04a97a9f by tomeras91
[Frontend] Add max_tokens prometheus metric (#9881)
Signed-off-by: Tomer Asida <tomera@ai21.com>
2024-11-04 22:53:24 +00:00

Name                 Last commit                                                                              Last updated
multiprocessing/     [Bugfix] Fix MQLLMEngine hanging (#9973)                                                 2024-11-04 16:01:43 -05:00
output_processor/    [core] simplify seq group code (#9569)                                                   2024-10-24 00:16:44 -07:00
__init__.py          Change the name to vLLM (#150)                                                           2023-06-17 03:07:40 -07:00
arg_utils.py         [Frontend] Multi-Modality Support for Loading Local Image Files (#9915)                  2024-11-04 15:34:57 +00:00
async_llm_engine.py  [2/N] executor pass the complete config to worker/modelrunner (#9938)                    2024-11-02 07:35:05 -07:00
async_timeout.py     [Bugfix] AsyncLLMEngine hangs with asyncio.run (#5654)                                   2024-06-19 13:57:12 -07:00
llm_engine.py        [Frontend] Add max_tokens prometheus metric (#9881)                                      2024-11-04 22:53:24 +00:00
metrics_types.py     [Frontend] Add max_tokens prometheus metric (#9881)                                      2024-11-04 22:53:24 +00:00
metrics.py           [Frontend] Add max_tokens prometheus metric (#9881)                                      2024-11-04 22:53:24 +00:00
protocol.py          [Frontend] re-enable multi-modality input in the new beam search implementation (#9427)  2024-10-29 11:49:47 +00:00