vllm / csrc / attention

Mirror of https://git.datalinker.icu/vllm-project/vllm.git, last synced 2026-01-22 05:54:27 +08:00.

Latest commit: 60f7624334 by Tao He, "Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support" (#11844), 2025-05-12 19:52:47 -07:00.

Name                     Last updated                Last commit message
mla/                     2025-04-27 06:29:21 -07:00  [NVIDIA] Support Cutlass MLA for Blackwell GPUs (#16032)
attention_dtypes.h       …
attention_generic.cuh    …
attention_kernels.cuh    2025-01-23 18:04:03 +00:00  [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906)
attention_utils.cuh      …
dtype_bfloat16.cuh       …
dtype_float16.cuh        …
dtype_float32.cuh        …
dtype_fp8.cuh            …
merge_attn_states.cu     2025-04-16 03:31:39 -07:00  [Bugfix][Kernel] fix potential cuda graph broken for merge_attn_states kernel (#16693)
paged_attention_v1.cu    2025-01-23 18:04:03 +00:00  [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906)
paged_attention_v2.cu    2025-01-23 18:04:03 +00:00  [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906)
vertical_slash_index.cu  2025-05-12 19:52:47 -07:00  Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support (#11844)
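
The "[FP8][Kernel] Dynamic kv cache scaling factors computation (#11906)" commit, which touches attention_kernels.cuh and both paged_attention kernels, refers to computing kv-cache quantization scales at runtime rather than relying on precomputed ones. Below is a minimal standalone CUDA sketch of that general idea, assuming a per-tensor scale taken as the absolute maximum of the cache values divided by 448 (the largest finite FP8 E4M3 magnitude); the kernel name, block size, and toy data are hypothetical, and this is not the vLLM kernel itself.

```cuda
// Hypothetical sketch of a dynamic FP8 kv-cache scale computation;
// illustrative only, not the kernel from vllm PR #11906.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void compute_kv_scale(const float* kv, int n, float* scale) {
    __shared__ float smax[256];
    float local = 0.0f;
    // Strided loop: each thread folds its slice of the cache into a local max.
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        local = fmaxf(local, fabsf(kv[i]));
    smax[threadIdx.x] = local;
    __syncthreads();
    // Shared-memory tree reduction for the block-wide absolute maximum.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            smax[threadIdx.x] = fmaxf(smax[threadIdx.x], smax[threadIdx.x + stride]);
        __syncthreads();
    }
    if (threadIdx.x == 0)
        *scale = smax[0] / 448.0f;  // 448 = max finite FP8 E4M3 magnitude
}

int main() {
    const int n = 4096;
    float *kv, *scale;
    cudaMallocManaged(&kv, n * sizeof(float));
    cudaMallocManaged(&scale, sizeof(float));
    for (int i = 0; i < n; ++i) kv[i] = 0.01f * (i - n / 2);  // toy cache values
    compute_kv_scale<<<1, 256>>>(kv, n, scale);
    cudaDeviceSynchronize();
    printf("dynamic kv-cache scale = %f\n", *scale);
    cudaFree(kv);
    cudaFree(scale);
    return 0;
}
```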
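Similarly, merge_attn_states.cu, touched by the "[Bugfix][Kernel] fix potential cuda graph broken for merge_attn_states kernel (#16693)" commit, merges partial attention outputs, which is how split-KV attention schemes generally combine per-chunk results. The host-side C++ below is a minimal sketch of the underlying log-sum-exp reweighting for a single head with two partial states; the function name and shapes are illustrative assumptions, not the actual CUDA implementation.

```cpp
// Hypothetical sketch of merging two partial attention states (o, lse);
// illustrative math only, not the merge_attn_states.cu kernel.
#include <cmath>
#include <cstdio>
#include <vector>

void merge_attn_states(std::vector<float>& o, float& lse,
                       const std::vector<float>& o1, float lse1,
                       const std::vector<float>& o2, float lse2) {
    // Numerically stable log-add-exp of the two partial softmax normalizers.
    float m = std::fmax(lse1, lse2);
    lse = m + std::log(std::exp(lse1 - m) + std::exp(lse2 - m));
    // Each partial output carries exp(lse_i) of un-normalized softmax mass,
    // so it receives weight exp(lse_i - lse) in the merged result.
    float w1 = std::exp(lse1 - lse), w2 = std::exp(lse2 - lse);
    for (size_t i = 0; i < o.size(); ++i)
        o[i] = w1 * o1[i] + w2 * o2[i];
}

int main() {
    std::vector<float> o1{1.0f, 0.0f}, o2{0.0f, 1.0f}, o(2);
    float lse = 0.0f;
    // Equal lse values mean equal softmax mass, so the merge averages.
    merge_attn_states(o, lse, o1, 2.0f, o2, 2.0f);
    printf("merged o = (%f, %f), lse = %f\n", o[0], o[1], lse);
    return 0;
}
```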