xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-28 09:12:36 +08:00
vllm / tests / kernels
Latest commit: 7b926e8901 by Yongye Zhu, 2025-12-22 17:34:19 +00:00
[MoE Refactor][9/N] Use modular kernel for unquantized Triton MoE (#31052)
Signed-off-by: Yongye Zhu <zyy1102000@gmail.com>
Name                                  Last commit                                                                                           Last updated
attention                             [MM Encoder]: Migrate legacy ViT MultiHeadAttention to new MMEncoderAttention interface (#30684)     2025-12-19 02:04:19 +08:00
core                                  [Kernel] Enable fused_qknorm_rope_kernel supports partial rope (#30821)                               2025-12-21 18:39:22 -08:00
mamba                                 …
moe                                   [MoE Refactor][9/N] Use modular kernel for unquantized Triton MoE (#31052)                           2025-12-22 17:34:19 +00:00
quantization                          [Bugfix] awq_gemm: fix argument order swap (#30364)                                                  2025-12-14 18:15:37 +08:00
__init__.py                           …
allclose_default.py                   …
quant_utils.py                        …
test_apply_repetition_penalties.py    …
test_cache_kernels.py                 …
test_fla_layernorm_guard.py           …
test_flex_attention.py                [Fix][FlexAttention] return max logical block index to handle reused blocks (#30915)                 2025-12-18 06:42:21 +00:00
test_fused_quant_activation.py        …
test_onednn.py                        …
test_shuffle_rows.py                  …
test_top_k_per_row.py                 …
utils.py                              …
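
Kernel tests in this directory typically run a fused kernel and compare its output against a straightforward eager-mode reference within loose floating-point tolerances (the allclose_default.py helper above suggests shared default tolerances for those comparisons). A minimal sketch of that pattern, using a hypothetical RMS-norm kernel as a stand-in for the real fused implementations (not code from this repository):

import torch

def reference_rms_norm(x: torch.Tensor, weight: torch.Tensor,
                       eps: float = 1e-6) -> torch.Tensor:
    # Eager-mode reference: normalize by root-mean-square over the last dim.
    variance = x.float().pow(2).mean(dim=-1, keepdim=True)
    return (x.float() * torch.rsqrt(variance + eps)).to(x.dtype) * weight

def test_kernel_matches_reference():
    torch.manual_seed(0)
    x = torch.randn(16, 1024, dtype=torch.float16)
    w = torch.randn(1024, dtype=torch.float16)
    out_ref = reference_rms_norm(x, w)
    # Stand-in for the fused kernel under test; a real test would invoke the
    # compiled op here and compare it against the eager reference.
    out_kernel = reference_rms_norm(x, w)
    torch.testing.assert_close(out_kernel, out_ref, atol=1e-2, rtol=1e-2)

Suites like these are run with pytest, e.g. pytest tests/kernels/test_cache_kernels.py.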