xinyun / vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-01-07 22:40:50 +08:00)
vllm / vllm / model_executor / layers / mamba
Latest commit: 1d93f11675 [Attention][CUDAGraph] Remove CG padding from attention backends (#29352)
Matthew Bonanni, 2025-12-02 13:48:08 -05:00
Signed-off-by: Matthew Bonanni <mbonanni@redhat.com>
ops/             …
__init__.py      …
abstract.py      [Attention] Update attention imports (#29540)   2025-11-27 11:19:09 -05:00
linear_attn.py   [Attention] Remove imports from vllm/attention/__init__.py (#29342)   2025-11-26 10:53:15 -07:00
mamba_mixer2.py  [V0 deprecation] Remove more V0 references (#29088)   2025-11-21 11:56:59 +00:00
mamba_mixer.py   [Attention][CUDAGraph] Remove CG padding from attention backends (#29352)   2025-12-02 13:48:08 -05:00
mamba_utils.py   …
short_conv.py    [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)   2025-11-19 16:24:55 +00:00
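
The short_conv.py entry above cites PR #26487, which made the mamba attention backend selectable and pluggable for other devices. As a rough illustration of what a device-keyed, pluggable backend-selector pattern can look like, here is a minimal hypothetical Python sketch; the names (BACKENDS, register_backend, select_backend, the backend classes) are invented for illustration and are not vLLM's actual API.

    # Hypothetical sketch of a pluggable backend selector, in the spirit of
    # PR #26487. All names here are illustrative, not vLLM's real interfaces.
    from typing import Callable, Dict

    # Registry mapping a device key to a backend factory.
    BACKENDS: Dict[str, Callable[[], object]] = {}

    def register_backend(device: str):
        """Decorator registering a backend class under a device key."""
        def wrap(cls):
            BACKENDS[device] = cls
            return cls
        return wrap

    @register_backend("cuda")
    class CudaMambaBackend:
        def run(self) -> str:
            return "mamba kernel on CUDA"

    @register_backend("cpu")
    class CpuMambaBackend:
        def run(self) -> str:
            return "mamba fallback on CPU"

    def select_backend(device: str):
        """Look up the backend registered for a device. Out-of-tree plugins
        can call register_backend() to add support for new hardware without
        touching the core selector."""
        try:
            return BACKENDS[device]()
        except KeyError:
            raise ValueError(f"no mamba attention backend registered for {device!r}")

    if __name__ == "__main__":
        print(select_backend("cuda").run())  # -> mamba kernel on CUDA

The point of the registry indirection is that the selector itself stays device-agnostic: adding a new accelerator means registering one more entry, not editing the dispatch logic.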