Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-30 10:38:45 +08:00)
vllm / vllm / model_executor / model_loader
Latest commit: 8f8f469b1b [BugFix] skip language model in Encoder (#30242) by dengyunyang, 2025-12-22 05:25:59 -08:00
Signed-off-by: dengyunyang <584797741@qq.com>
File | Last commit | Date
__init__.py | Default model load/config/tokenizer to mistral format if relevant files exist (#28659) | 2025-11-21 13:58:59 -08:00
base_loader.py | … |
bitsandbytes_loader.py | … |
default_loader.py | [Chore]: Reorganize model repo operating functions in transformers_utils (#29680) | 2025-11-28 08:46:51 -08:00
dummy_loader.py | … |
gguf_loader.py | [Model][Quantization] Restore MoE + GGUF models support (incl. Qwen3 MoE) by allowing Sideload Parameters (#30116) | 2025-12-09 05:30:05 +00:00
online_quantization.py | Move online quantization to model.load_weights (#26327) | 2025-11-18 16:52:41 -08:00
runai_streamer_loader.py | … |
sharded_state_loader.py | [log] add weights loading time log to sharded_state loader (#28628) | 2025-11-21 21:06:09 +00:00
tensorizer_loader.py | … |
tensorizer.py | … |
tpu.py | … |
utils.py | [BugFix] skip language model in Encoder (#30242) | 2025-12-22 05:25:59 -08:00
weight_utils.py | Filter safetensors files to download if .safetensors.index.json exists (#30537) | 2025-12-18 14:51:17 +00:00