xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-10 05:04:58 +08:00
vllm / tests / v1 / attention
Latest commit: e48b2e6848 by vllmellm, 2025-11-24 15:24:49 +00:00
[Bugfix] [ROCm] [UX] Reorganize ROCm Backend Selection Logic (#26980)
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
File                                        Last commit                                                                      Date
..
test_attention_backends_selection.py        …
test_attention_backends.py                  [Attention] Refactor CUDA attention backend selection logic (#24794)            2025-11-11 07:40:44 -05:00
test_attention_splitting.py                 …
test_batch_reordering.py                    …
test_chunked_local_attention.py             …
test_mla_backends.py                        [Attention] Refactor FA block_size limitations to hybrid models only (#29084)   2025-11-22 06:38:44 -08:00
test_rocm_attention_backends_selection.py   [Bugfix] [ROCm] [UX] Reorganize ROCm Backend Selection Logic (#26980)            2025-11-24 15:24:49 +00:00
test_sparse_mla_backends.py                 …
utils.py                                    [Attention] Refactor CUDA attention backend selection logic (#24794)            2025-11-11 07:40:44 -05:00