xinyun/vllm (mirror of https://git.datalinker.icu/vllm-project/vllm.git)
vllm/tests/models/decoder_only/vision_language
Latest commit: 32c9eff2ff [Bugfix][V1] Fix molmo text-only inputs (#11676), Jee Jee Li <pandaleefree@gmail.com>, 2025-01-06 15:22:25 +00:00
Name                Last change                 Last commit message
processing/         2025-01-04 23:45:57 +08:00  [Bugfix] Fix precision error in LLaVA-NeXT (#11735)
vlm_utils/          2025-01-06 15:22:25 +00:00  [Bugfix][V1] Fix molmo text-only inputs (#11676)
__init__.py         2024-09-13 10:20:06 -07:00  [CI/Build] Reorganize models tests (#7820)
test_awq.py         2024-12-26 04:23:20 +00:00  [Misc] Move some multimodal utils to modality-specific modules (#11494)
test_h2ovl.py       2024-12-26 04:23:20 +00:00  [Misc] Move some multimodal utils to modality-specific modules (#11494)
test_intern_vit.py  2024-11-09 11:39:14 -08:00  [CI/Build] Split up models tests (#10069)
test_models.py      2025-01-06 15:22:25 +00:00  [Bugfix][V1] Fix molmo text-only inputs (#11676)
test_phi3v.py       2024-12-26 04:23:20 +00:00  [Misc] Move some multimodal utils to modality-specific modules (#11494)
test_pixtral.py     2024-12-05 16:05:52 +00:00  [CI/Build] Bump test transformers version (#10106)
test_qwen2_vl.py    2025-01-04 11:40:53 +00:00  [VLM] Merged multi-modal processors for LLaVA-NeXT-Video and LLaVA-OneVision (#11717)
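These are pytest-based model tests. Below is a minimal sketch of how one of the files might be invoked from the vLLM repository root, assuming a working vLLM development install with the project's test requirements; the test_phi3v.py path is taken from the listing above, and the optional keyword filter is purely illustrative.

```python
# Minimal sketch: run a single vision-language test file from the vLLM repo root.
# Assumes vLLM and its test requirements are already installed in this environment;
# an optional command-line argument is forwarded to pytest's -k keyword filter.
import sys

import pytest

if __name__ == "__main__":
    args = ["tests/models/decoder_only/vision_language/test_phi3v.py", "-v"]
    if len(sys.argv) > 1:
        args += ["-k", sys.argv[1]]
    raise SystemExit(pytest.main(args))
```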