vllm / tests / kernels
Latest commit: 3fd1fb0b60 by Huamin Li, 2025-11-28 15:26:52 -08:00
Revert "[LoRA] Support FusedMoE LoRA Triton kernel for mxfp4 (#28971)" (#29697)
Signed-off-by: Huamin Li <3ericli@gmail.com>
| Name | Last commit | Last commit date |
|------|-------------|------------------|
| .. | | |
| attention | [Misc] Remove redundant attention var constants (#29650) | 2025-11-28 04:35:19 -08:00 |
| core | Update `rope_scaling` to `rope_parameters` in preparation for Transformers v5 (#28542) | 2025-11-19 09:06:36 -08:00 |
| mamba | [V1] [Hybrid] Mamba1 Automatic Prefix Caching (#26377) | 2025-11-02 04:16:23 -08:00 |
| moe | Revert "[LoRA] Support FusedMoE LoRA Triton kernel for mxfp4 (#28971)" (#29697) | 2025-11-28 15:26:52 -08:00 |
| quantization | [Performance] Reduce DeepGEMM N dim restriction from 128 to 64 multiplier (#28687) | 2025-11-19 15:47:13 -08:00 |
| __init__.py | … | … |
| allclose_default.py | … | … |
| quant_utils.py | … | … |
| test_apply_repetition_penalties.py | … | … |
| test_cache_kernels.py | [Bugfix][cache_kernels]: Fix OOB in cache_kernels.cu (#28760) | 2025-11-20 02:52:02 -08:00 |
| test_fla_layernorm_guard.py | … | … |
| test_flex_attention.py | … | … |
| test_fused_quant_activation.py | … | … |
| test_onednn.py | [CPU] Refactor CPU attention backend (#27954) | 2025-11-12 09:43:06 +08:00 |
| test_shuffle_rows.py | … | … |
| test_top_k_per_row.py | … | … |
| utils.py | [Misc] Remove redundant attention var constants (#29650) | 2025-11-28 04:35:19 -08:00 |