Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-09 21:35:01 +08:00)
vllm / csrc / attention
Latest commit: 3521ba4f25 by SangBin Cho, [Core][Model runner refactoring 1/N] Refactor attn metadata term (#4518), 2024-05-03 10:20:12 -07:00
File                    Last commit                                                                 Last updated
attention_dtypes.h      Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290)               2024-04-03 14:15:55 -07:00
attention_generic.cuh   Change the name to vLLM (#150)                                              2023-06-17 03:07:40 -07:00
attention_kernels.cu    [Core][Model runner refactoring 1/N] Refactor attn metadata term (#4518)    2024-05-03 10:20:12 -07:00
attention_utils.cuh     Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                          2023-12-07 23:16:52 -08:00
dtype_bfloat16.cuh      Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                          2023-12-07 23:16:52 -08:00
dtype_float16.cuh       Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                          2023-12-07 23:16:52 -08:00
dtype_float32.cuh       [BugFix] Fix NaN errors in paged attention kernel (#936)                   2023-09-04 09:20:06 +09:00
dtype_fp8.cuh           Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290)               2024-04-03 14:15:55 -07:00