vllm / vllm / attention
Latest commit: dd5fa7e04f [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 (#17004)
Signed-off-by: Hosang Yoon <hosang.yoon@amd.com>
2025-05-21 08:35:00 -07:00
Name         Last commit                                                                      Date
backends     [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 (#17004)  2025-05-21 08:35:00 -07:00
ops          [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 (#17004)  2025-05-21 08:35:00 -07:00
utils        [BugFix] Fix vllm_flash_attn install issues (#17267)                            2025-04-27 17:27:56 -07:00
__init__.py  [Attention] Flash Attention 3 - fp8 (#14570)                                    2025-03-20 01:14:20 -04:00
layer.py     [v1] AttentionMetadata for each layer (#17394)                                  2025-05-06 07:58:37 -07:00
selector.py  Correct capitalisation: VLLM -> vLLM (#14562)                                   2025-03-10 16:36:21 +00:00
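For orientation, below is a minimal sketch of how the files listed above relate at import time. It assumes vLLM is installed; the exact public names shift between vLLM versions, so treat the imports as illustrative of this directory's layout rather than a stable API reference.

```python
# A minimal sketch, assuming vLLM is installed and that these modules
# still expose the names below (exports vary across vLLM versions).
from vllm.attention import Attention                   # layer defined in layer.py
from vllm.attention.selector import get_attn_backend   # dispatch in selector.py

# layer.py defines the Attention module that model code instantiates;
# selector.py's get_attn_backend() picks a concrete implementation
# (FlashAttention, ROCm paged attention, etc.) from backends/ at runtime,
# and ops/ holds the lower-level kernels those backends call into.
```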