Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-03-21 18:43:35 +08:00)
vllm/csrc/moe
Latest commit 6a6fc41c79: gptq marlin quantization support for fused moe with lora (#30254)
Author: Bhanu Prakash Voutharoja (Signed-off-by: Bhanu068 <voutharoja.bhanu06@gmail.com>)
Date: 2025-12-12 02:27:22 +00:00
File                          Date                        Latest commit
marlin_moe_wna16/             2025-12-12 02:27:22 +00:00  gptq marlin quantization support for fused moe with lora (#30254)
permute_unpermute_kernels/    2025-07-27 07:08:00 -07:00  Fix CUDA permute/unpermute for use with DeepGemm Moe (#17934)
dynamic_4bit_int_moe_cpu.cpp  2025-12-02 06:21:39 +00:00  [CPU] Parallelize over tokens in int4 moe (#29600)
grouped_topk_kernels.cu       2025-12-11 17:43:41 -05:00  [Refactor] Remove useless syncwarp (#30510)
moe_align_sum_kernels.cu      2025-12-09 10:35:16 +08:00  Lora MoE Align Improvements (#29257)
moe_ops.h                     2025-12-09 10:35:16 +08:00  Lora MoE Align Improvements (#29257)
moe_permute_unpermute_op.cu   2025-08-20 10:35:26 -04:00  [Kernel] CUTLASS MoE FP8: Integrate cuda moe permute/unpermute (#23045)
moe_wna16_utils.h             …
moe_wna16.cu                  …
topk_softmax_kernels.cu       2025-10-17 07:30:35 +00:00  [Kernel][Performance] Fuse float cast and renormalize to topk softmax kernel (#26717)
torch_bindings.cpp            2025-12-09 10:35:16 +08:00  Lora MoE Align Improvements (#29257)
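For context, torch_bindings.cpp is the file that exposes this directory's kernels to Python as PyTorch custom ops. Below is a minimal sketch of that registration pattern, not vLLM's actual binding code: the namespace _moe_demo, the simplified topk_softmax signature, and the stub body are assumptions for illustration, while TORCH_LIBRARY and TORCH_LIBRARY_IMPL are PyTorch's real registration macros.

// Minimal sketch of PyTorch custom-op registration, the pattern a file
// like torch_bindings.cpp follows. Namespace and signature are hypothetical.
#include <torch/extension.h>

// Hypothetical launcher; the real kernels live in files such as
// topk_softmax_kernels.cu, with different signatures declared in moe_ops.h.
void topk_softmax(torch::Tensor& topk_weights,
                  torch::Tensor& topk_indices,
                  const torch::Tensor& gating_output) {
  // Stub body: a real implementation would launch the CUDA kernel here.
  TORCH_CHECK(gating_output.is_cuda(), "gating_output must be a CUDA tensor");
}

// Declare the op schema; "Tensor!" marks arguments mutated in place.
TORCH_LIBRARY(_moe_demo, m) {
  m.def("topk_softmax(Tensor! topk_weights, Tensor! topk_indices, "
        "Tensor gating_output) -> ()");
}

// Bind the CUDA implementation for that schema.
TORCH_LIBRARY_IMPL(_moe_demo, CUDA, m) {
  m.impl("topk_softmax", &topk_softmax);
}

Once compiled into an extension, an op registered this way is callable from Python as torch.ops._moe_demo.topk_softmax(weights, indices, gating_output).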