xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-01-19 03:44:28 +08:00)
vllm / vllm / platforms
Latest commit: a742322092 [Attention] Blackwell FP8 MLA support with CUTLASS_MLA backend (#23289)
Author: Matthew Bonanni
Signed-off-by: Matthew Bonanni <mbonanni@redhat.com>
2025-09-03 14:05:24 -04:00
File          Last commit                                                                                                  Date
__init__.py   [TPU] Support Pathways in vLLM (#21417)                                                                      2025-07-30 10:02:12 -07:00
cpu.py        [CPU] Enable data parallel for CPU backend (#23903)                                                          2025-08-29 02:19:58 -07:00
cuda.py       [Attention] Blackwell FP8 MLA support with CUTLASS_MLA backend (#23289)                                      2025-09-03 14:05:24 -04:00
interface.py  [Doc]: fix typos in Python comments (#24001)                                                                 2025-08-31 08:21:59 +00:00
neuron.py     [Refactor] Abstract Platform Interface for Distributed Backend and Add xccl Support for Intel XPU (#19410)   2025-07-07 04:32:32 +00:00
rocm.py       [XPU] Add xpu torch.compile support (#22609)                                                                 2025-08-27 05:33:27 +00:00
tpu.py        [Kernel] Add FP8 support with FlashMLA backend (#22668)                                                      2025-08-22 02:26:32 +00:00
xpu.py        [XPU] Fix the bug of LoRA logits on the XPU platform (#24081)                                                2025-09-03 08:21:18 +08:00