Mirror of https://git.datalinker.icu/vllm-project/vllm.git
vllm/vllm/v1
Latest commit: b031a455a9 by youkaichao, 2024-12-06 10:07:15 +00:00
[torch.compile] add logging for compilation time (#10941)
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Name            | Last commit                                                                                           | Last updated
attention/      | [misc] use out argument for flash attention (#10822)                                                  | 2024-12-02 10:50:10 +00:00
core/           | [V1] Do not allocate beyond the max_model_len (#10730)                                                | 2024-11-28 00:13:15 -08:00
engine/         | [torch.compile] add logging for compilation time (#10941)                                             | 2024-12-06 10:07:15 +00:00
executor/       | [V1] Fix Configs (#9971)                                                                              | 2024-11-04 00:24:40 +00:00
sample/         | [V1] Support per-request seed (#9945)                                                                 | 2024-11-03 09:14:17 -08:00
worker/         | [V1] Fix when max_model_len is not divisible by block_size (#10903)                                   | 2024-12-04 16:54:05 -08:00
__init__.py     | [V1] AsyncLLM Implementation (#9826)                                                                  | 2024-11-11 23:05:38 +00:00
outputs.py      | [V1] Implement vLLM V1 [1/N] (#9289)                                                                  | 2024-10-22 01:24:07 -07:00
request.py      | [V1] VLM - Run the mm_mapper preprocessor in the frontend process (#10640)                            | 2024-12-03 10:33:10 +00:00
serial_utils.py | [V1] Use pickle for serializing EngineCoreRequest & Add multimodal inputs to EngineCoreRequest (#10245) | 2024-11-12 08:57:14 -08:00
utils.py        | [V1] Add all_token_ids attribute to Request (#10135)                                                  | 2024-11-07 17:08:24 -08:00
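The sample/ entry above references per-request seeding (#9945). As a minimal sketch of what that feature enables through vLLM's public LLM/SamplingParams interface (the model name and prompt here are placeholder assumptions, not taken from this listing):

from vllm import LLM, SamplingParams

# Placeholder model choice; any model vLLM can load works the same way.
llm = LLM(model="facebook/opt-125m")

# Two requests sharing a seed sample identically; the third diverges.
params = [
    SamplingParams(temperature=0.8, seed=42),
    SamplingParams(temperature=0.8, seed=42),
    SamplingParams(temperature=0.8, seed=7),
]
outputs = llm.generate(["The capital of France is"] * 3, params)
for out in outputs:
    print(out.outputs[0].text)

Seeding per request rather than per engine lets a server reproduce one client's sampling without affecting concurrently running requests.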
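The worker/ entry fixes handling when max_model_len is not divisible by block_size. The usual remedy for that class of bug is ceiling division when sizing the KV-cache block table; the snippet below is a hedged illustration of that arithmetic with made-up values, not the actual patch in #10903:

# Illustrative values only: 2050 tokens do not fill an exact number of 16-token blocks.
block_size = 16
max_model_len = 2050

# Floor division would allocate 128 blocks (2048 slots) and lose the last 2 tokens.
num_blocks = (max_model_len + block_size - 1) // block_size  # ceiling division -> 129

assert num_blocks * block_size >= max_model_len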