Mirror of https://git.datalinker.icu/vllm-project/vllm.git, last synced 2025-12-23 23:55:50 +08:00.
vllm / csrc / moe
Latest commit: 6a6fc41c79 gptq marlin quantization support for fused moe with lora (#30254), 2025-12-12 02:27:22 +00:00, by Bhanu Prakash Voutharoja (Signed-off-by: Bhanu068 <voutharoja.bhanu06@gmail.com>)

File | Last commit | Date
marlin_moe_wna16 | gptq marlin quantization support for fused moe with lora (#30254) | 2025-12-12 02:27:22 +00:00
permute_unpermute_kernels | … | …
dynamic_4bit_int_moe_cpu.cpp | [CPU] Parallelize over tokens in int4 moe (#29600) | 2025-12-02 06:21:39 +00:00
grouped_topk_kernels.cu | [Refactor] Remove useless syncwarp (#30510) | 2025-12-11 17:43:41 -05:00
moe_align_sum_kernels.cu | Lora MoE Align Improvements (#29257) | 2025-12-09 10:35:16 +08:00
moe_ops.h | Lora MoE Align Improvements (#29257) | 2025-12-09 10:35:16 +08:00
moe_permute_unpermute_op.cu | … | …
moe_wna16_utils.h | … | …
moe_wna16.cu | … | …
topk_softmax_kernels.cu | … | …
torch_bindings.cpp | Lora MoE Align Improvements (#29257) | 2025-12-09 10:35:16 +08:00