vllm/vllm/transformers_utils
Latest commit: eec906d811 "[Misc] Add placeholder module (#11501)" by Cyrus Leung (Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>), 2024-12-26 13:12:51 +00:00
Entry | Last commit | Date
configs/ | [Model] Support telechat2 (#10311) | 2024-11-27 11:32:35 +00:00
tokenizer_group/ | [Misc] Clean up and consolidate LRUCache (#11339) | 2024-12-20 00:59:32 +08:00
tokenizers/ | [Bugfix] Fix guided decoding with tokenizer mode mistral (#11046) | 2024-12-17 22:34:08 -08:00
__init__.py | Fix the log to correct guide user to install modelscope (#9793) | 2024-10-29 10:36:59 -07:00
config.py | [BUG] Remove token param #10921 (#11022) | 2024-12-10 17:38:15 +00:00
detokenizer_utils.py | [V1] Implement vLLM V1 [1/N] (#9289) | 2024-10-22 01:24:07 -07:00
detokenizer.py | [Bugfix] fix detokenizer shallow copy (#5919) | 2024-10-22 15:38:12 -07:00
processor.py | [Model] Support Pixtral models in the HF Transformers format (#9036) | 2024-10-18 13:29:56 -06:00
s3_utils.py | [Misc] Add placeholder module (#11501) | 2024-12-26 13:12:51 +00:00
tokenizer.py | [Bugfix] Fix guided decoding with tokenizer mode mistral (#11046) | 2024-12-17 22:34:08 -08:00
utils.py | [Core] Loading model from S3 using RunAI Model Streamer as optional loader (#10192) | 2024-12-20 16:46:24 +00:00
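
As a usage sketch of two of the modules listed above, the snippet below loads a tokenizer and a model config through vllm.transformers_utils.tokenizer.get_tokenizer and vllm.transformers_utils.config.get_config; the model name facebook/opt-125m and the exact keyword arguments are illustrative assumptions, not taken from this listing.

    # Minimal sketch: exercise the tokenizer and config helpers listed above.
    # The model name and keyword arguments are illustrative assumptions.
    from vllm.transformers_utils.config import get_config
    from vllm.transformers_utils.tokenizer import get_tokenizer

    model = "facebook/opt-125m"  # example model, not referenced by this directory

    # get_tokenizer wraps the Hugging Face AutoTokenizer (and vLLM's Mistral
    # tokenizer handling when tokenizer_mode="mistral").
    tokenizer = get_tokenizer(model, tokenizer_mode="auto", trust_remote_code=False)

    # get_config resolves the model's Hugging Face config, including the extra
    # config classes registered under transformers_utils/configs.
    config = get_config(model, trust_remote_code=False)

    print(type(tokenizer).__name__, config.model_type)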