xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-25 06:04:29 +08:00
vllm / vllm / model_executor / model_loader
Latest commit: ae66818379 [Misc] Fix pre-commit (#29238) by Cyrus Leung, 2025-11-22 06:48:01 -08:00
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
File | Last commit | Date
__init__.py | Default model load/config/tokenizer to mistral format if relevant files exist (#28659) | 2025-11-21 13:58:59 -08:00
base_loader.py | …
bitsandbytes_loader.py | …
default_loader.py | Default model load/config/tokenizer to mistral format if relevant files exist (#28659) | 2025-11-21 13:58:59 -08:00
dummy_loader.py | …
gguf_loader.py | [Model] Add Gemma3 GGUF multimodal support (#27772) | 2025-11-18 08:56:29 -08:00
online_quantization.py | Move online quantization to model.load_weights (#26327) | 2025-11-18 16:52:41 -08:00
runai_streamer_loader.py | …
sharded_state_loader.py | [log] add weights loading time log to sharded_state loader (#28628) | 2025-11-21 21:06:09 +00:00
tensorizer_loader.py | …
tensorizer.py | …
tpu.py | …
utils.py | [Misc] Fix pre-commit (#29238) | 2025-11-22 06:48:01 -08:00
weight_utils.py | [torchao] fix safetensors for sharding (#28169) | 2025-11-19 16:39:45 -08:00
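The modules listed above back vLLM's pluggable weight-loading paths. A minimal usage sketch, assuming vLLM's public LLM API and its load_format engine argument; the model name and the chosen format below are illustrative placeholders, not taken from this listing:

```python
from vllm import LLM

# Sketch only: load_format selects which loader module handles weight loading,
# e.g. "auto" (default_loader.py), "dummy" (dummy_loader.py, random weights,
# no download), "bitsandbytes", "gguf", "sharded_state", "runai_streamer",
# or "tensorizer".
llm = LLM(model="facebook/opt-125m", load_format="dummy")

outputs = llm.generate(["Hello, my name is"])
print(outputs[0].outputs[0].text)
```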