xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, last synced 2025-12-17 11:35:49 +08:00.
vllm / vllm / model_executor

History
Latest commit d44e9df7d4 by Shanshan Shen, 2025-11-19 16:24:55 +00:00:
[Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)
Signed-off-by: shen-shanshan <467638484@qq.com>
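The headline of #26487, "make it pluggable for other device", describes a registry-plus-selector pattern: each device plugs a backend factory into a registry, and a selector picks one at runtime. Below is a minimal, hypothetical Python sketch of that pattern; the names (MambaAttentionBackend, register_mamba_backend, get_mamba_backend) are illustrative assumptions, not vLLM's actual API.

```python
# Hypothetical sketch of a pluggable backend-selector pattern, in the spirit
# of #26487. None of these names are vLLM's real API.
from typing import Callable, Dict


class MambaAttentionBackend:
    """Minimal stand-in for a device-specific backend implementation."""

    def run(self, x):
        raise NotImplementedError


# Registry mapping a device/platform name to a backend factory.
_MAMBA_BACKENDS: Dict[str, Callable[[], MambaAttentionBackend]] = {}


def register_mamba_backend(device: str):
    """Decorator so out-of-tree devices can plug in their own backend."""
    def wrap(factory: Callable[[], MambaAttentionBackend]):
        _MAMBA_BACKENDS[device] = factory
        return factory
    return wrap


def get_mamba_backend(device: str) -> MambaAttentionBackend:
    """Selector: instantiate whichever backend is registered for `device`."""
    try:
        return _MAMBA_BACKENDS[device]()
    except KeyError:
        raise ValueError(f"no mamba attention backend registered for {device!r}")


@register_mamba_backend("cuda")
class CUDAMambaBackend(MambaAttentionBackend):
    def run(self, x):
        return x  # a real backend would dispatch to device kernels here


backend = get_mamba_backend("cuda")
```

The point of the indirection is that selection logic lives in one place, while backend implementations can be registered from anywhere, including out-of-tree device plugins.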
layers          [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)   2025-11-19 16:24:55 +00:00
model_loader    Move online quantization to model.load_weights (#26327)                                                   2025-11-18 16:52:41 -08:00
models          [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)   2025-11-19 16:24:55 +00:00
warmup          [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)                               2025-11-11 18:58:33 -08:00
__init__.py     …
custom_op.py    …
parameter.py    …
utils.py        …
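The model_loader row above points at #26327, "Move online quantization to model.load_weights". As a rough illustration of why quantization would happen inside load_weights, here is a hedged sketch: a toy module quantizes each checkpoint tensor to int8 as it streams in, so the full-precision weights never need to be materialized on the device. The module, the per-tensor int8 scheme, and all names are assumptions for illustration, not vLLM's implementation.

```python
# Hypothetical sketch of online quantization inside load_weights, in the
# spirit of #26327. The scheme and names are illustrative only.
from typing import Iterable, Tuple

import torch


def quantize_int8(w: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    """Symmetric per-tensor int8 quantization: returns (q, scale)."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Only the quantized weight and its scale are kept as buffers.
        self.register_buffer("w_q", torch.zeros(4, 4, dtype=torch.int8))
        self.register_buffer("w_scale", torch.ones(()))

    def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]) -> None:
        # Quantize each checkpoint tensor as it streams in, so a full
        # fp16/fp32 copy of the model never has to live on the device.
        for name, tensor in weights:
            if name == "w":
                q, scale = quantize_int8(tensor)
                self.w_q.copy_(q)
                self.w_scale.copy_(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize on the fly for the matmul.
        return x @ (self.w_q.to(x.dtype) * self.w_scale)


model = TinyModel()
model.load_weights([("w", torch.randn(4, 4))])
print(model(torch.randn(2, 4)).shape)  # torch.Size([2, 4])
```

Folding quantization into the weight-loading loop means it composes with streamed checkpoint formats: each tensor is converted and discarded before the next one arrives.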