Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-23 07:15:34 +08:00)
vllm / tests / v1 / attention
Latest commit 951445a52d by Harry Mellor: Remove default values from InitVars so that they're not stored (#29859)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
2025-12-02 12:16:37 +00:00
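
The commit subject refers to Python's dataclasses.InitVar. Below is a minimal sketch of the behaviour it alludes to; the class and field names are hypothetical, not vLLM's actual code. An InitVar value is passed to __post_init__ rather than written to the instance, but a default assigned to the InitVar lingers as a class attribute, so removing the default is what keeps the name off objects entirely.

from dataclasses import dataclass, InitVar


@dataclass
class WithDefault:
    # Hypothetical example. The InitVar value is consumed by __post_init__
    # and never written to the instance, but the default (128) stays behind
    # as a class attribute and is still reachable from instances.
    sliding_window: InitVar[int] = 128
    effective_window: int = 0

    def __post_init__(self, sliding_window: int) -> None:
        self.effective_window = sliding_window


@dataclass
class NoDefault:
    # Same shape but with no default: nothing named "sliding_window"
    # is reachable on instances at all.
    sliding_window: InitVar[int]
    effective_window: int = 0

    def __post_init__(self, sliding_window: int) -> None:
        self.effective_window = sliding_window


a = WithDefault()
b = NoDefault(128)
print(vars(a))                       # {'effective_window': 128} -- InitVar not in the instance dict
print(hasattr(a, "sliding_window"))  # True: the default leaks via the class attribute
print(hasattr(b, "sliding_window"))  # False: with no default, the name is truly not stored

Dropping the defaults (as the commit title describes) turns the InitVars into required constructor arguments and removes the leftover class attributes, so the values genuinely are not stored anywhere on the dataclass.
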
..
test_attention_backends_selection.py    …
test_attention_backends.py    [v1] Add real sliding window calculation to FlexAttention direct BlockMask building (#26015); see the FlexAttention sketch below    2025-12-01 13:12:51 +00:00
test_attention_splitting.py    …
test_batch_reordering.py    …
test_chunked_local_attention.py    …
test_mla_backends.py    [Attention] Refactor FA block_size limitations to hybrid models only (#29084)    2025-11-22 06:38:44 -08:00
test_rocm_attention_backends_selection.py    [Attention] Update attention imports (#29540)    2025-11-27 11:19:09 -05:00
test_sparse_mla_backends.py    …
utils.py    Remove default values from InitVars so that they're not stored (#29859)    2025-12-02 12:16:37 +00:00
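
The entry for test_attention_backends.py above refers to PyTorch FlexAttention's BlockMask. As a minimal sketch of building a sliding-window causal BlockMask directly with torch.nn.attention.flex_attention.create_block_mask: the WINDOW and SEQ_LEN values are made up for illustration, and this is not vLLM's actual mask-construction code.

import torch
from torch.nn.attention.flex_attention import create_block_mask

# Hypothetical window size and sequence length, purely for illustration.
WINDOW = 128
SEQ_LEN = 1024


def sliding_window_causal(b, h, q_idx, kv_idx):
    # Each query may attend to keys at or before its own position,
    # but no further back than WINDOW tokens.
    return (q_idx >= kv_idx) & (q_idx - kv_idx <= WINDOW)


# Build the BlockMask directly from the mask_mod; B=None and H=None
# broadcast the same mask over batch and heads.
block_mask = create_block_mask(
    sliding_window_causal, B=None, H=None, Q_LEN=SEQ_LEN, KV_LEN=SEQ_LEN,
    device="cpu",
)
print(block_mask)  # summarises the block sparsity of the sliding-window mask

The resulting BlockMask would then be passed to flex_attention(q, k, v, block_mask=block_mask); constructing it from a mask_mod like this is presumably what the commit subject means by "direct BlockMask building", with the referenced PR adding the real sliding-window computation to that path.
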