vllm/vllm/attention
Latest commit: cf5f000d21 [torch.compile] Hide KV cache behind torch.compile boundary (#11677) by Chen Zhang, 2025-01-10 13:14:42 +08:00 (Signed-off-by: Chen Zhang <zhangch99@outlook.com>)
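The latest commit (#11677) keeps the KV cache out of the inputs of the `torch.compile`'d region, so the cache tensors never appear in the compiled graph's signature. The sketch below is illustrative only, not vLLM's implementation: `ToyAttention`, `num_slots`, and `slot` are hypothetical names. It shows the general pattern of stashing the cache as module state so reads and writes happen inside the compile boundary while `forward()` never takes the cache as an argument.

```python
# Minimal sketch of hiding a KV cache behind a torch.compile boundary.
# All names here are hypothetical; this is not vLLM's actual code.
import torch
import torch.nn as nn


class ToyAttention(nn.Module):
    """Single-head attention whose KV cache is module state, not an input."""

    def __init__(self, num_slots: int, head_dim: int):
        super().__init__()
        # The cache is registered on the module, so forward() never takes it
        # as an argument and the compiled graph's inputs stay stable.
        self.register_buffer("kv_cache", torch.zeros(2, num_slots, head_dim))

    def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                slot: int) -> torch.Tensor:
        # In-place writes mutate the hidden cache inside the compiled region.
        self.kv_cache[0, slot] = k
        self.kv_cache[1, slot] = v
        keys = self.kv_cache[0, : slot + 1]
        values = self.kv_cache[1, : slot + 1]
        scores = (q @ keys.t()) / keys.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ values


attn = ToyAttention(num_slots=16, head_dim=8)
compiled = torch.compile(attn)
# Only q/k/v and a position cross the boundary; the cache never does.
out = compiled(torch.randn(1, 8), torch.randn(8), torch.randn(8), slot=0)
print(out.shape)  # torch.Size([1, 8])
```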
Contents of vllm/attention:

| Name | Last commit | Date |
|------|-------------|------|
| `backends` | [Misc] Move `print_*_once` from utils to logger (#11298) | 2025-01-09 12:48:12 +08:00 |
| `ops` | [Bugfix] Fix chunked prefill with model dtype float32 on Turing Devices (#9850) | 2024-11-25 12:23:32 -05:00 |
| `__init__.py` | [Core] Add `AttentionState` abstraction (#7663) | 2024-08-20 18:50:45 +00:00 |
| `layer.py` | [torch.compile] Hide KV cache behind torch.compile boundary (#11677) | 2025-01-10 13:14:42 +08:00 |
| `selector.py` | [platform] Allow platform specify attention backend (#11609) | 2025-01-09 21:46:50 +08:00 |
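The `selector.py` row (#11609) concerns letting a platform specify its own attention backend instead of relying solely on the built-in selection logic. The following is a hedged sketch of that kind of hook under assumed names (`Platform`, `get_attn_backend`, `select_attn_backend`, and the backend strings are all hypothetical); vLLM's real interfaces differ.

```python
# Illustrative sketch (hypothetical names, not vLLM's actual classes) of a
# platform hook for backend selection: the selector asks the current
# Platform first and falls back to its own default only if it defers.
from dataclasses import dataclass


@dataclass
class Platform:
    name: str

    def get_attn_backend(self) -> str | None:
        """Return a backend name, or None to defer to the default selector."""
        return None


class CustomAccelerator(Platform):
    def get_attn_backend(self) -> str | None:
        # An out-of-tree platform can pin its preferred backend here.
        return "CUSTOM_FLASH"


def select_attn_backend(platform: Platform) -> str:
    # The platform's choice takes precedence over the built-in default.
    return platform.get_attn_backend() or "TORCH_SDPA"


print(select_attn_backend(Platform("cpu")))           # TORCH_SDPA
print(select_attn_backend(CustomAccelerator("npu")))  # CUSTOM_FLASH
```

The design choice this pattern captures is inversion of control: the core selector no longer needs to enumerate every platform, and new hardware plugins can register a backend without patching the selection code.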