vllm / vllm / attention (directory history)
Latest commit: 6aeb1dab4a by Cyrus Leung, 2025-09-11 01:48:25 -07:00
[Bugfix] Fix incorrect import of CacheConfig (#24631)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Name          Date                          Last commit
backends      2025-09-10 23:29:40 -07:00    [Model] New model support for Motif-1-Tiny (#23414)
layers        2025-09-11 01:48:25 -07:00    [Bugfix] Fix incorrect import of CacheConfig (#24631)
ops           2025-09-10 13:59:55 -07:00    [torch.compile][ROCm][V1] Enable attention output FP8 fusion for V1 attention backends (#19767)
utils         2025-09-04 02:47:59 -07:00    [Attention] FlashAttn MLA (#14258)
__init__.py   2025-08-20 17:14:59 -07:00    Remove duplicate entry in vllm.attention.__all__ (#23296)
layer.py      2025-09-10 22:28:41 -07:00    [Bugfix] Add missing VIT backend dispatch on CPU (#24623)
selector.py   2025-08-12 03:21:44 -07:00    [gpt-oss] Enable gpt-oss on ampere (#22714)
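Each row pairs an entry of vllm/attention with the most recent commit that touched it. For reference, a listing like this can be reproduced from a local checkout of the repository; the script below is a minimal sketch, not part of vLLM's own tooling, and assumes git is on PATH and the working directory is the repository root.

```python
# Minimal sketch: print the latest commit touching each entry under
# vllm/attention, mirroring the "Last commit" and "Date" columns above.
# Uses only the standard library plus the git CLI.
import subprocess
from pathlib import Path

ATTENTION_DIR = Path("vllm/attention")  # path within the repo checkout

for entry in sorted(ATTENTION_DIR.iterdir()):
    # %h = short hash, %ad = author date, %s = subject line
    result = subprocess.run(
        ["git", "log", "-1", "--format=%h %ad %s", "--date=iso", "--", str(entry)],
        capture_output=True, text=True, check=True,
    )
    print(f"{entry.name:<14} {result.stdout.strip()}")
```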