xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-24 01:03:44 +08:00
vllm / tests / kernels
Latest commit 01a583fea4 by jvlunteren: [Kernel] Decouple Tile Size from Block Size in Triton Unified Attention Kernel (#21197)
Signed-off-by: Jan van Lunteren <jvl@zurich.ibm.com>
2025-09-18 14:27:01 +00:00
Name                                 Last commit                                                                                       Last commit date
attention/                           [Kernel] Decouple Tile Size from Block Size in Triton Unified Attention Kernel (#21197)          2025-09-18 14:27:01 +00:00
core/                                [Chore] Remove unused batched RoPE op & kernel (#24789)                                          2025-09-13 00:08:20 -07:00
mamba/                               …
moe/                                 [Kernel] Delegate construction of FusedMoEQuantConfig to FusedMoEMethodBase subclasses (#22537)  2025-09-17 17:43:31 -06:00
quantization/                        [Kernel] Delegate construction of FusedMoEQuantConfig to FusedMoEMethodBase subclasses (#22537)  2025-09-17 17:43:31 -06:00
__init__.py                          …
allclose_default.py                  …
quant_utils.py                       …
test_apply_repetition_penalties.py   …
test_flex_attention.py               …
test_fused_quant_activation.py       …
test_onednn.py                       …
test_shuffle_rows.py                 …
test_triton_flash_attention.py       …
utils.py                             …