xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-04-23 13:17:04 +08:00
History of vllm/vllm/attention
Latest commit: f5aa307d77 "Remove duplicate entry in vllm.attention.__all__" (#23296), Russell Bryant <rbryant@redhat.com>, 2025-08-20 17:14:59 -07:00
backends/      [V0 Deprecation] Remove V0 FlashInfer attention backend (#22776)            2025-08-18 19:54:16 -07:00
layers/        [Docs] Fix warnings in docs build (#22588)                                  2025-08-10 05:49:51 -07:00
ops/           Remove unneeded ROCm platform import when using CUDA (#22765)               2025-08-12 21:26:38 -07:00
utils/         [MISC] Add init files for python package (#20908)                           2025-07-15 12:16:33 +00:00
__init__.py    Remove duplicate entry in vllm.attention.__all__ (#23296)                   2025-08-20 17:14:59 -07:00
layer.py       [NVIDIA] Support Flashinfer TRTLLM FP8-q/kv/out Attention Kernel (#21716)   2025-08-19 08:22:15 -04:00
selector.py    [gpt-oss] Enable gpt-oss on ampere (#22714)                                 2025-08-12 03:21:44 -07:00
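The most recent change to this directory removed a duplicated name from `vllm.attention.__all__`. A minimal, illustrative sketch of how such a duplicate can be detected; the example list below is hypothetical, not the actual contents of vllm's `__all__`:

```python
# Illustrative sketch (not from the vllm source): finding duplicate
# entries in a module's __all__, the kind of issue fixed in #23296.
from collections import Counter

def find_duplicates(all_list):
    """Return the names that appear more than once in an __all__ list."""
    return [name for name, count in Counter(all_list).items() if count > 1]

# Hypothetical export list with one duplicated name.
example_all = ["Attention", "AttentionBackend", "AttentionMetadata", "Attention"]
print(find_duplicates(example_all))  # ['Attention']
```

A check like this can run in a unit test or a pre-commit hook so duplicated exports never reach the package's public interface.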