vllm / tests / kernels
Latest commit 72b1c2ae2c (Pavani Majety, 2025-11-07 04:18:39 -08:00): [Bugfix] Use latency MOE backend as default for Flashinfer and other misc fixes (#27439)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
| Name | Latest commit | Date |
| --- | --- | --- |
| `attention` | [ROCm][MLA] Support block-size > 1 for AITER MLA backend (#27224) | 2025-11-05 10:43:02 -05:00 |
| `core` | [Chore] Separate out `vllm.utils.platform_utils.py` (#27374) | 2025-10-23 19:08:06 +00:00 |
| `mamba` | [V1] [Hybrid] Mamba1 Automatic Prefix Caching (#26377) | 2025-11-02 04:16:23 -08:00 |
| `moe` | Bugfix: Cutlass FP8 FusedMoE bad scaling factors (#27255) | 2025-11-05 06:06:06 -05:00 |
| `quantization` | [Bugfix] Use latency MOE backend as default for Flashinfer and other misc fixes (#27439) | 2025-11-07 04:18:39 -08:00 |
| `__init__.py` | … | |
| `allclose_default.py` | … | |
| `quant_utils.py` | [Chore]: Extract math and argparse utilities to separate modules (#27188) | 2025-10-26 04:03:32 -07:00 |
| `test_apply_repetition_penalties.py` | … | |
| `test_fla_layernorm_guard.py` | … | |
| `test_flex_attention.py` | … | |
| `test_fused_quant_activation.py` | … | |
| `test_onednn.py` | … | |
| `test_shuffle_rows.py` | … | |
| `test_top_k_per_row.py` | [Deepseek v3.2] Remove extra logics in indexer (#26465) | 2025-10-21 23:34:03 +00:00 |
| `test_triton_flash_attention.py` | … | |
| `utils.py` | [Chore] Clean up pytorch helper functions in `vllm.utils` (#26908) | 2025-10-18 09:48:22 -07:00 |