xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-09 22:44:54 +08:00
Path: vllm/tests/models/multimodal
Latest commit: f38ee34a0a, "[feat] Enable mm caching for transformers backend (#21358)" by Raushan Turganbay (Signed-off-by: raushan <raushan@huggingface.co>), 2025-07-22 08:18:46 -07:00
generation       [feat] Enable mm caching for transformers backend (#21358)                                           2025-07-22 08:18:46 -07:00
pooling          [Misc] unify variable for LLM instance (#20996)                                                      2025-07-21 12:18:33 +01:00
processing       [VLM] Add Nemotron-Nano-VL-8B-V1 support (#20349)                                                    2025-07-17 03:07:55 -07:00
__init__.py      [CI/Build] Move model-specific multi-modal processing tests (#11934)                                  2025-01-11 13:50:05 +08:00
test_mapping.py  [Bugfix] Update multimodel models mapping to fit new checkpoint after Transformers v4.52 (#19151)    2025-06-17 15:58:38 +00:00