vllm/tests/lora
Latest commit: ae0ccb4017 by Or Sharir, 2024-03-13 12:18:25 -07:00
Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350)

Name                      Date                        Last commit message
__init__.py               2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
conftest.py               2024-03-11 11:03:45 -07:00  Add distributed model executor abstraction (#3191)
test_gemma.py             2024-02-28 13:03:28 -08:00  Add LoRA support for Gemma (#3050)
test_layer_variation.py   2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_layers.py            2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_llama.py             2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_lora_manager.py      2024-02-14 00:55:45 +01:00  Add LoRA support for Mixtral (#2831)
test_lora.py              2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_mixtral.py           2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_punica.py            2024-03-13 12:18:25 -07:00  Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350)
test_tokenizer.py         2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_utils.py             2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_worker.py            2024-02-01 15:46:39 -08:00  Remove hardcoded device="cuda" to support more devices (#2503)
utils.py                  2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
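
These are pytest-style test modules (all named test_*.py), so a typical way to exercise the directory is to invoke pytest on it from the repository root. A minimal sketch, assuming pytest is installed and a CUDA-capable GPU is available (several of these tests load real base models and LoRA adapters):

# Minimal sketch; assumptions: run from the vllm repo root, pytest installed,
# GPU available for the model-loading tests (e.g. test_llama.py, test_punica.py).
import sys

import pytest

if __name__ == "__main__":
    # Equivalent to running `pytest tests/lora -v` on the command line;
    # add a -k expression to select a single module, e.g. "-k", "test_punica".
    sys.exit(pytest.main(["tests/lora", "-v"]))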