xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-15 09:05:01 +08:00)
vllm / tests / v1 / attention
Latest commit: fc1d8be3dc [Attention] Update attention imports (#29540)
Signed-off-by: Matthew Bonanni <mbonanni@redhat.com>
2025-11-27 11:19:09 -05:00
File                                          Last commit                                                                       Date
test_attention_backends_selection.py          …
test_attention_backends.py                    [Attention] Refactor CUDA attention backend selection logic (#24794)             2025-11-11 07:40:44 -05:00
test_attention_splitting.py                   …
test_batch_reordering.py                      [BugFix] Reordering extend logic fix (#27739)                                     2025-10-29 21:39:34 -07:00
test_chunked_local_attention.py               …
test_mla_backends.py                          [Attention] Refactor FA block_size limitations to hybrid models only (#29084)    2025-11-22 06:38:44 -08:00
test_rocm_attention_backends_selection.py     [Attention] Update attention imports (#29540)                                     2025-11-27 11:19:09 -05:00
test_sparse_mla_backends.py                   Add TP parameter to attention tests (#27683)                                      2025-11-03 13:04:40 -08:00
utils.py                                      [ROCm][CI] Fix test_cudagraph_mode failure in AMD CI (#29367)                     2025-11-25 07:55:09 +00:00
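
The files above are ordinary pytest modules. As a minimal sketch (assuming pytest is installed and the script is run from the vLLM repository root, so the relative path tests/v1/attention resolves to the directory listed here), the whole suite can be invoked programmatically:

```python
# Minimal sketch: run the v1 attention test suite programmatically.
# Assumes pytest is installed and this runs from the vLLM repo root,
# so the relative path below points at the directory listed above.
import sys

import pytest

if __name__ == "__main__":
    # pytest.main takes the same arguments as the CLI and returns an exit code.
    sys.exit(pytest.main(["-v", "tests/v1/attention"]))
```

Equivalently, `pytest -v tests/v1/attention` from a shell; a single module such as `tests/v1/attention/test_attention_backends.py` can be passed instead to run just that file.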