xinyun / vllm (mirror of https://git.datalinker.icu/vllm-project/vllm.git)
vllm / tests / v1 / attention
Latest commit: 7ba32aa60b [Attention][FlashInfer] Enable FP8 FlashInfer (TRTLLM) MLA decode (#24705)
Signed-off-by: Matthew Bonanni <mbonanni001@gmail.com>
Matthew Bonanni, 2025-09-12 15:45:53 -06:00
File | Last commit | Date
test_attention_backends_selection.py | [Attention] Unify mamba and attention backend selection (#23171) | 2025-08-25 09:09:36 +00:00
test_attention_backends.py | [Attention][FlashInfer] Enable FP8 FlashInfer (TRTLLM) MLA decode (#24705) | 2025-09-12 15:45:53 -06:00
test_attention_splitting.py | [Attention][DBO] Add support for "splitting" the CommonAttentionMetadata (#21153) | 2025-08-01 19:47:53 -07:00
test_chunked_local_attention.py | fix some typos (#24071) | 2025-09-02 20:44:50 -07:00
test_mla_backends.py | [Attention] FlashAttention MLA cudagraph support (#23958) | 2025-09-08 22:05:26 +00:00
utils.py | [Attention] FlashAttn MLA (#14258) | 2025-09-04 02:47:59 -07:00