vllm/tests/models
Latest commit: 3e9ce609bd [Bugfix] Fix nomic max_model_len (#18755) by wang.yuqi, 2025-05-27 20:29:53 -07:00
Name                     | Last commit                                                                                  | Date
------------------------ | -------------------------------------------------------------------------------------------- | --------------------------
fixtures/                | [Mistral-Small 3.1] Update docs and tests (#14977)                                           | 2025-03-18 03:29:42 -07:00
language/                | [Bugfix] Fix nomic max_model_len (#18755)                                                    | 2025-05-27 20:29:53 -07:00
multimodal/              | [Bugfix] Fix profiling dummy data for Pixtral (#18677)                                       | 2025-05-25 14:05:30 +00:00
quantization/            | [V1][Quantization] Add CUDA graph compatible v1 GGUF support (#18646)                        | 2025-05-27 04:40:28 +00:00
__init__.py              | [CI/Build] Move test_utils.py to tests/utils.py (#4425)                                      | 2024-05-13 23:50:09 +09:00
registry.py              | [Bugfix] Fix profiling dummy data for Pixtral (#18677)                                       | 2025-05-25 14:05:30 +00:00
test_initialization.py   | [CI] Enable test_initialization to run on V1 (#16736)                                        | 2025-05-23 15:09:44 -07:00
test_oot_registration.py | Re-submit: Fix: Proper RGBA -> RGB conversion for PIL images. (#18569)                       | 2025-05-23 01:59:18 +00:00
test_registry.py         | [Bugfix][ROCm] running new process using spawn method for rocm in tests. (#14810)           | 2025-03-17 11:33:35 +00:00
test_transformers.py     | Enable hybrid attention models for Transformers backend (#18494)                             | 2025-05-23 10:12:08 +08:00
test_utils.py            | [Misc] Allow AutoWeightsLoader to skip loading weights with specific substr in name (#18358) | 2025-05-19 20:20:12 -07:00
test_vision.py           | [Bugfix] Fix Positive Feature Layers in Llava Models (#13514)                                | 2025-02-19 08:50:07 +00:00
utils.py                 | [New Model]: nomic-embed-text-v2-moe (#17785)                                                | 2025-05-11 00:59:43 -07:00