vllm / vllm / transformers_utils

Latest commit: `1bedf210e3` by Roger Wang, "Bump transformers version for Llama 3.1 hotfix and patch Chameleon" (#6690), 2024-07-23 13:47:48 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| `configs/` | Bump transformers version for Llama 3.1 hotfix and patch Chameleon (#6690) | 2024-07-23 13:47:48 -07:00 |
| `tokenizer_group/` | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| `tokenizers/` | [Mypy] Part 3 fix typing for nested directories for most of directory (#4161) | 2024-04-22 21:32:44 -07:00 |
| `__init__.py` | [Tokenizer] Add an option to specify tokenizer (#284) | 2023-06-28 09:46:58 -07:00 |
| `config.py` | Bump transformers version for Llama 3.1 hotfix and patch Chameleon (#6690) | 2024-07-23 13:47:48 -07:00 |
| `detokenizer.py` | [BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227) | 2024-07-18 15:13:30 +08:00 |
| `image_processor.py` | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| `tokenizer.py` | [Core] Support dynamically loading Lora adapter from HuggingFace (#6234) | 2024-07-22 15:42:40 -07:00 |
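The modules listed above are vLLM's thin wrappers around Hugging Face transformers. For orientation, the sketch below shows how the two most commonly used entry points, `get_config` in `config.py` and `get_tokenizer` in `tokenizer.py`, are typically called. This is a minimal, non-authoritative sketch: the example model name is illustrative only, and the keyword arguments are assumptions about the signatures at this commit rather than a verified API reference.

```python
# Minimal sketch, not an authoritative API reference: the keyword arguments
# and the example model name are assumptions about this commit's signatures.
from vllm.transformers_utils.config import get_config
from vllm.transformers_utils.tokenizer import get_tokenizer

MODEL = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # example model name only

# get_config resolves a Hugging Face model name or local path to a
# transformers PretrainedConfig, dispatching to the patched config classes
# under configs/ where needed.
config = get_config(MODEL, trust_remote_code=False)

# get_tokenizer wraps transformers.AutoTokenizer and returns the loaded
# tokenizer instance.
tokenizer = get_tokenizer(MODEL, tokenizer_mode="auto", trust_remote_code=False)

print(config.model_type, type(tokenizer).__name__)
```

Both helpers exist mainly so the rest of vLLM has a single place to hook model-specific workarounds (such as the Chameleon patch in the latest commit above) instead of calling transformers directly.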