xinyun/vllm — mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-09 02:15:01 +08:00)

vllm/csrc/moe
Latest commit: 3263799056 by Bram Wasti — [unrevert] Add batch invariant kernel override for FlashInfer backend [2/n] (#26373)
Signed-off-by: Bram Wasti <bwasti@meta.com>
Signed-off-by: Bram Wasti <bwasti@fb.com>
2025-10-13 10:24:53 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| marlin_moe_wna16/ | Convert formatting to use ruff instead of yapf + isort (#26247) | 2025-10-05 07:06:22 -07:00 |
| permute_unpermute_kernels/ | Fix CUDA permute/unpermute for use with DeepGemm Moe (#17934) | 2025-07-27 07:08:00 -07:00 |
| dynamic_4bit_int_moe_cpu.cpp | [fix]: add Arm 4bit fused moe support (#23809) | 2025-09-24 01:32:22 +00:00 |
| grouped_topk_kernels.cu | Use macro guard CUDA functions for back compatibility in grouped_topk_kernel.cu (#25346) | 2025-09-23 09:45:39 -07:00 |
| moe_align_sum_kernels.cu | [Model] Add LongCat-Flash (#23991) | 2025-09-24 21:53:40 -07:00 |
| moe_ops.h | [Kernel] Add fused grouped_topk kernel for MoE (#23274) | 2025-08-25 11:47:52 -07:00 |
| moe_permute_unpermute_op.cu | [Kernel] CUTLASS MoE FP8: Integrate cuda moe permute/unpermute (#23045) | 2025-08-20 10:35:26 -04:00 |
| moe_wna16_utils.h | pre-commit autoupdate (#17380) | 2025-04-29 06:46:55 -07:00 |
| moe_wna16.cu | [BugFix] Accuracy fix for llama4 int4 - improperly casted scales (#16801) | 2025-04-17 22:13:29 -07:00 |
| topk_softmax_kernels.cu | [unrevert] Add batch invariant kernel override for FlashInfer backend [2/n] (#26373) | 2025-10-13 10:24:53 -04:00 |
| torch_bindings.cpp | [Kernel] Add fused grouped_topk kernel for MoE (#23274) | 2025-08-25 11:47:52 -07:00 |
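Several files in this directory (`topk_softmax_kernels.cu`, `grouped_topk_kernels.cu`) implement the expert-routing step of Mixture-of-Experts layers on the GPU. As a rough reference only — not the vLLM implementation, and with a hypothetical function name and signature — top-k softmax routing can be sketched in NumPy:

```python
import numpy as np

def topk_softmax_routing(gate_logits, k):
    """Reference sketch of top-k softmax MoE routing (hypothetical
    helper; the CUDA kernels here fuse these steps on the GPU).

    gate_logits: (num_tokens, num_experts) raw router outputs.
    Returns (topk_weights, topk_ids), each of shape (num_tokens, k).
    """
    # Numerically stable softmax over the expert dimension.
    z = gate_logits - gate_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=-1, keepdims=True)
    # Select the k highest-probability experts per token.
    topk_ids = np.argsort(-probs, axis=-1)[:, :k]
    topk_weights = np.take_along_axis(probs, topk_ids, axis=-1)
    # Renormalize so each token's selected-expert weights sum to 1.
    topk_weights /= topk_weights.sum(axis=-1, keepdims=True)
    return topk_weights, topk_ids

# One token routed to 2 of 4 experts.
logits = np.array([[2.0, 0.5, 1.0, -1.0]])
weights, ids = topk_softmax_routing(logits, k=2)
```

A fused GPU kernel avoids materializing the full softmax and the sort for every token, which is why this step gets its own `.cu` file rather than being composed from generic ops.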