xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-01-02 06:20:53 +08:00)
vllm / vllm / platforms
Latest commit: ee14644ba9, [ROCm] Aiter Quant Kernels (#25552), Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>, 2025-12-09 14:27:37 +00:00
__init__.py    [TPU] Rename path to tpu platform (#28452)                                   2025-11-11 19:16:47 +00:00
cpu.py         [v1] Add PrefixLM support to FlexAttention backend (#27938)                  2025-12-07 15:51:36 +00:00
cuda.py        [BugFix][DeepSeek-V3.2] Fix backend selection logic for Blackwell (#30195)   2025-12-07 10:53:51 -05:00
interface.py   [v1] Add PrefixLM support to FlexAttention backend (#27938)                  2025-12-07 15:51:36 +00:00
rocm.py        [ROCm] Aiter Quant Kernels (#25552)                                          2025-12-09 14:27:37 +00:00
tpu.py         [v1] Add PrefixLM support to FlexAttention backend (#27938)                  2025-12-07 15:51:36 +00:00
xpu.py         [v1] Add PrefixLM support to FlexAttention backend (#27938)                  2025-12-07 15:51:36 +00:00