Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-10 03:05:02 +08:00
vllm/tests/models/multimodal
Latest commit 2226d5bd85 by Aritra Roy Gosthipaty: [Bugfix] Decode Tokenized IDs to Strings for hf_processor in llm.chat() with model_impl=transformers (#21353)
Signed-off-by: ariG23498 <aritra.born2fly@gmail.com>
2025-07-22 08:27:28 -07:00
generation       [feat] Enable mm caching for transformers backend (#21358)                                                      2025-07-22 08:18:46 -07:00
pooling          [Misc] unify variable for LLM instance (#20996)                                                                 2025-07-21 12:18:33 +01:00
processing       [Bugfix] Decode Tokenized IDs to Strings for hf_processor in llm.chat() with model_impl=transformers (#21353)   2025-07-22 08:27:28 -07:00
__init__.py      [CI/Build] Move model-specific multi-modal processing tests (#11934)                                            2025-01-11 13:50:05 +08:00
test_mapping.py  [Bugfix] Update multimodel models mapping to fit new checkpoint after Transformers v4.52 (#19151)               2025-06-17 15:58:38 +00:00