xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-23 01:25:33 +08:00)
vllm/model_executor/layers/mamba
History
Latest commit: aab0102a26 by Cyrus Leung, 2025-11-21 11:56:59 +00:00
[V0 deprecation] Remove more V0 references (#29088)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
ops/             [Hybrid] [Kernel] Fix chunk scan kernel when BLOCK_SIZE_DSTATE > 128 (#28295)                            2025-11-14 22:55:42 +00:00
__init__.py      …
abstract.py      [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
linear_attn.py   [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
mamba_mixer2.py  [V0 deprecation] Remove more V0 references (#29088)                                                       2025-11-21 11:56:59 +00:00
mamba_mixer.py   [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
mamba_utils.py   …
short_conv.py    [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
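
Most of these files were last touched by PR #26487, which made the mamba attention backend selectable and pluggable so other device platforms can supply their own implementation. As a rough illustration of that kind of design (all names below are hypothetical and not vllm's actual API), a registry-based selector might look like this:

```python
# Hypothetical sketch of a pluggable backend selector, in the spirit of
# PR #26487 ("Add selector for mamba attention backend and make it
# pluggable for other device"). Every name here is illustrative; it is
# not vllm's actual API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class MambaBackend:
    """Minimal stand-in for a mamba attention backend implementation."""
    name: str
    forward: Callable[[list[float]], list[float]]


# Registry mapping a device key to a backend factory, so out-of-tree
# platforms can plug in their own implementation without touching core code.
_BACKEND_REGISTRY: dict[str, Callable[[], MambaBackend]] = {}


def register_mamba_backend(device: str):
    """Decorator that registers a backend factory under a device key."""
    def decorator(factory: Callable[[], MambaBackend]) -> Callable[[], MambaBackend]:
        _BACKEND_REGISTRY[device] = factory
        return factory
    return decorator


def select_mamba_backend(device: str) -> MambaBackend:
    """Return the backend registered for `device`, falling back to CPU."""
    factory = _BACKEND_REGISTRY.get(device, _BACKEND_REGISTRY["cpu"])
    return factory()


@register_mamba_backend("cpu")
def cpu_backend() -> MambaBackend:
    # Trivial reference implementation: identity over the inputs.
    return MambaBackend(name="cpu-reference", forward=lambda xs: xs)


if __name__ == "__main__":
    # No backend registered for "cuda" here, so this falls back to CPU.
    backend = select_mamba_backend("cuda")
    print(backend.name, backend.forward([1.0, 2.0, 3.0]))
```

A registry keyed by device string keeps the core layer code free of per-platform imports; a new hardware plugin only needs to call the registration decorator to become selectable.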