xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-23 04:55:01 +08:00)
vllm / vllm / platforms
Latest commit: a4730c1b4f by Chaojun Zhang, "[XPU]Fix crash due to removed VLLM_USE_V1 attribute (#28520)"
Signed-off-by: chaojun-zhang <chaojun.zhang@intel.com>
2025-11-12 10:20:55 +00:00
File           Latest commit                                                          Date
__init__.py    [TPU] Rename path to tpu platform (#28452)                             2025-11-11 19:16:47 +00:00
cpu.py         [CPU] Refactor CPU attention backend (#27954)                          2025-11-12 09:43:06 +08:00
cuda.py        Prefer FlashAttention MLA as default over FlashMLA (#27363)            2025-11-11 17:13:51 +00:00
interface.py   [Attention] Refactor CUDA attention backend selection logic (#24794)   2025-11-11 07:40:44 -05:00
rocm.py        VLLM_USE_TRITON_FLASH_ATTN V0 variable deprecation (#27611)            2025-11-11 18:34:36 -08:00
tpu.py         [Attention] Refactor CUDA attention backend selection logic (#24794)   2025-11-11 07:40:44 -05:00
xpu.py         [XPU]Fix crash due to removed VLLM_USE_V1 attribute (#28520)           2025-11-12 10:20:55 +00:00
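Judging by the filenames, interface.py holds the shared platform abstraction while the other modules cover individual hardware backends (CPU, CUDA, ROCm, TPU, XPU), with __init__.py resolving the active platform. The sketch below illustrates how downstream code might branch on the detected platform; the names current_platform, is_cuda(), and is_rocm() are assumptions about this package's API, and pick_attention_backend with its return strings is a purely hypothetical helper, not taken from the listing above.

    # Minimal sketch, assuming vllm.platforms exposes a current_platform object
    # with is_cuda()/is_rocm() queries. pick_attention_backend is hypothetical.
    from vllm.platforms import current_platform

    def pick_attention_backend() -> str:
        # Branch on the detected hardware platform and return an illustrative label.
        if current_platform.is_cuda():
            return "flash-attention"
        if current_platform.is_rocm():
            return "triton-flash-attention"
        return "cpu-fallback"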