xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-13 23:35:34 +08:00)
vllm / csrc / attention
Latest commit: d9d342d214 by Pleaplusone
[Performance][MLA][ROCm] Remove redundant D2D copy in deepseek (#27457)
Signed-off-by: ganyi <ygan@amd.com>
2025-11-26 12:45:28 +08:00
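The latest commit above removes a redundant device-to-device (D2D) copy. For context, here is a minimal sketch of what that class of fix generally looks like; the kernel and buffer names are hypothetical and this is not the actual vLLM code. The "before" path computes into a temporary buffer and then relocates the result with `cudaMemcpyDeviceToDevice`; the "after" path has the kernel write to the destination directly.

```cuda
#include <cuda_runtime.h>

// Stand-in for the real computation (illustrative only).
__global__ void compute_sketch(float* dst, const float* src, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) dst[i] = src[i] * 2.0f;
}

// Before: compute into a temporary, then copy it where it belongs.
void run_before(float* dst, float* tmp, const float* src, int n) {
  compute_sketch<<<(n + 255) / 256, 256>>>(tmp, src, n);
  // Redundant D2D copy: an extra pass over the data just to relocate it.
  cudaMemcpy(dst, tmp, n * sizeof(float), cudaMemcpyDeviceToDevice);
}

// After: same result, one kernel, no extra copy or temporary buffer.
void run_after(float* dst, const float* src, int n) {
  compute_sketch<<<(n + 255) / 256, 256>>>(dst, src, n);
}
```

Dropping the copy saves both the extra memory traffic and the temporary allocation, which matters on bandwidth-bound attention paths.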
| Name | Last commit | Last updated |
|------|-------------|--------------|
| mla/ | [Attention] Tune CUTLASS MLA num_splits (#26846) | 2025-10-16 06:36:09 -07:00 |
| attention_dtypes.h | … | |
| attention_generic.cuh | … | |
| attention_kernels.cuh | [Refactor] Refactor FP8 & INT8 Quant Folder inside w8a8 (#25293) | 2025-10-08 10:20:48 -04:00 |
| attention_utils.cuh | [AMD][CI/Build] Disambiguation of the function call for ROCm 6.2 headers compatibility (#7477) | 2024-08-21 16:47:36 -07:00 |
| dtype_bfloat16.cuh | [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001) | 2024-08-05 16:00:01 -04:00 |
| dtype_float16.cuh | … | |
| dtype_float32.cuh | … | |
| dtype_fp8.cuh | … | |
| merge_attn_states.cu | [Performance][MLA][ROCm] Remove redundant D2D copy in deepseek (#27457) | 2025-11-26 12:45:28 +08:00 |
| paged_attention_v1.cu | [Bugfix][ROCm] Fix for warp_size uses on host (#21205) | 2025-07-24 00:37:19 -07:00 |
| paged_attention_v2.cu | [Bugfix][ROCm] Fix for warp_size uses on host (#21205) | 2025-07-24 00:37:19 -07:00 |
| vertical_slash_index.cu | Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support (#11844) | 2025-05-12 19:52:47 -07:00 |
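For context on `merge_attn_states.cu`: when attention over a long key/value range is split into partial computations (as in split-KV and MLA decode paths), each part produces a locally normalized output plus a per-token log-sum-exp (LSE), and the parts are then combined by rescaling into a shared softmax denominator. Below is a minimal sketch of that merge under assumed names and layouts; it is not the actual vLLM kernel.

```cuda
#include <cmath>

// Merge two partial attention outputs using their LSEs.
// Hypothetical layout: out_* are [num_tokens, head_size], lse_* are [num_tokens].
__global__ void merge_attn_states_sketch(
    float* __restrict__ out,          // merged output
    const float* __restrict__ out_a,  // partial output A
    const float* __restrict__ lse_a,  // per-token LSE for A
    const float* __restrict__ out_b,  // partial output B
    const float* __restrict__ lse_b,  // per-token LSE for B
    int head_size) {
  const int token = blockIdx.x;
  const float la = lse_a[token];
  const float lb = lse_b[token];
  // Subtract the max LSE before exponentiating for numerical stability.
  const float m = fmaxf(la, lb);
  const float wa = __expf(la - m);
  const float wb = __expf(lb - m);
  const float inv_sum = 1.0f / (wa + wb);
  // Each thread strides over the head dimension for this token.
  for (int i = threadIdx.x; i < head_size; i += blockDim.x) {
    const long idx = (long)token * head_size + i;
    out[idx] = (wa * out_a[idx] + wb * out_b[idx]) * inv_sum;
  }
}
```

Launched one block per token, e.g. `merge_attn_states_sketch<<<num_tokens, 128>>>(...)`. Writing the merged result straight into the final output buffer is exactly the kind of structure that makes an intermediate D2D copy, like the one removed in #27457, unnecessary.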