xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git
vllm / vllm / transformers_utils
Latest commit: 66d617e343 by Cyrus Leung (Co-authored-by: Roger Wang <ywang@roblox.com>), 2024-08-07 09:12:05 +00:00
[Frontend] Gracefully handle missing chat template and fix CI failure (#7238)
Name                 Last commit                                                                       Last updated
configs              [Model] Initialize support for InternVL2 series models (#6514)                   2024-07-29 10:16:30 +00:00
tokenizer_group      [ Frontend ] Multiprocessing for OpenAI Server with zeromq (#6883)               2024-08-02 18:27:28 -07:00
tokenizers           [Mypy] Part 3 fix typing for nested directories for most of directory (#4161)    2024-04-22 21:32:44 -07:00
__init__.py          [Tokenizer] Add an option to specify tokenizer (#284)                            2023-06-28 09:46:58 -07:00
config.py            [Core] Support loading GGUF model (#5191)                                        2024-08-05 17:54:23 -06:00
detokenizer.py       [Performance] Optimize get_seqs (#7051)                                          2024-08-01 18:29:52 -07:00
image_processor.py   [Core] Dynamic image size support for VLMs (#5276)                               2024-07-02 20:34:00 -07:00
tokenizer.py         [Frontend] Gracefully handle missing chat template and fix CI failure (#7238)    2024-08-07 09:12:05 +00:00