vllm/vllm/model_executor

Latest commit: f2bd246c17 by Jani Monoses, "[VLM] Fix paligemma, fuyu and persimmon with transformers 4.45 : use config.text_config.vocab_size" (#8707), 2024-09-23 14:43:09 +00:00
Name | Last commit | Date
guided_decoding | Revert "[Misc][Bugfix] Disable guided decoding for mistral tokenizer" (#8593) | 2024-09-19 04:14:28 +00:00
layers | [SpecDec][Misc] Cleanup, remove bonus token logic. (#8701) | 2024-09-22 12:34:14 -07:00
model_loader | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00
models | [VLM] Fix paligemma, fuyu and persimmon with transformers 4.45 : use config.text_config.vocab_size (#8707) | 2024-09-23 14:43:09 +00:00
__init__.py | [Performance] Optimize e2e overheads: Reduce python allocations (#7162) | 2024-08-08 21:34:28 -07:00
custom_op.py | [torch.compile] add a flag to disable custom op (#8488) | 2024-09-14 13:07:16 -07:00
parameter.py | [Misc] Update GPTQ to use vLLMParameters (#7976) | 2024-09-03 17:21:44 -04:00
pooling_metadata.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00
sampling_metadata.py | [refactor] remove triton based sampler (#8524) | 2024-09-16 20:04:48 -07:00
utils.py | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00