xinyun/vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-24 18:56:53 +08:00
vllm/tests/kernels/attention
Latest commit: 9fb52e523a by Cyrus Leung, 2025-07-06 09:54:36 -07:00
[V1] Support any head size for FlexAttention backend (#20467)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
..
conftest.py                         …
test_attention_selector.py          [V1] Support any head size for FlexAttention backend (#20467)             2025-07-06 09:54:36 -07:00
test_attention.py                   test_attention compat with coming xformers change (#20487)                2025-07-05 19:37:59 -07:00
test_blocksparse_attention.py       …
test_cache.py                       [CI] change spell checker from codespell to typos (#18711)                2025-06-11 19:57:10 -07:00
test_cascade_flash_attn.py          …
test_encoder_decoder_attn.py        [CI] change spell checker from codespell to typos (#18711)                2025-06-11 19:57:10 -07:00
test_flash_attn.py                  …
test_flashinfer.py                  …
test_flashmla.py                    …
test_lightning_attn.py              …
test_merge_attn_states.py           …
test_mha_attn.py                    …
test_mla_decode_cpu.py              [Refactor] Remove duplicate ceil_div (#20023)                              2025-06-25 05:19:09 +00:00
test_prefix_prefill.py              …
test_rocm_attention_selector.py     [BugFix][V1][ROCm] Triton MLA uses V0 backend on V1 engine (#19067)       2025-07-01 16:12:19 +08:00
test_triton_decode_attention.py     [Refactor] Remove duplicate ceil_div (#20023)                              2025-06-25 05:19:09 +00:00
test_triton_unified_attention.py    …