xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-27 11:05:52 +08:00
vllm/csrc/quantization
Latest commit: 460a2b1100 by Sage Moore
[torch.compile] Add torch inductor pass for fusing silu_and_mul with subsequent scaled_fp8_quant operations (#10867)
Signed-off-by: Sage Moore <sage@neuralmagic.com>
2025-05-01 07:59:28 -07:00
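The latest commit fuses a SiLU-gated multiply with a subsequent FP8 quantization step. As a rough illustration of the unfused reference semantics being combined, here is a minimal pure-Python sketch; the helper implementations, the per-tensor `scale` handling, and the `448.0` float8_e4m3fn maximum are assumptions for illustration, not vLLM's actual CUDA kernels or API:

```python
import math

def silu(v: float) -> float:
    """SiLU activation: v * sigmoid(v)."""
    return v / (1.0 + math.exp(-v))

def silu_and_mul(row):
    # First half of the row is the gate, second half the up-projection:
    # out[i] = SiLU(gate[i]) * up[i], halving the last dimension.
    d = len(row) // 2
    return [silu(row[i]) * row[d + i] for i in range(d)]

def scaled_fp8_quant(vals, scale):
    # Divide by the scale, then clamp to the assumed representable
    # range of float8_e4m3fn (+/- 448.0). Real kernels would also
    # round to the nearest fp8 value; that step is omitted here.
    FP8_MAX = 448.0
    return [max(-FP8_MAX, min(FP8_MAX, v / scale)) for v in vals]

# Example: gate = [1.0, -2.0], up = [0.5, 3.0]
row = [1.0, -2.0, 0.5, 3.0]
fused_out = scaled_fp8_quant(silu_and_mul(row), scale=0.5)
print(len(fused_out))  # 2 (last dimension is halved by silu_and_mul)
```

The inductor pass presumably matches this `silu_and_mul` -> `scaled_fp8_quant` pattern in the compiled graph and replaces it with a single fused kernel, avoiding a round trip of the intermediate activation through global memory.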
aqlm
…
awq
…
compressed_tensors
…
cutlass_w8a8
[Bugfix] Fix cutlass dispatch for fp8/int8 to properly invoke M<=16 c… (#16751)
2025-04-27 19:38:42 -07:00
fp4
[NVIDIA] Support Cutlass MLA for Blackwell GPUs (#16032)
2025-04-27 06:29:21 -07:00
fp8
…
fused_kernels
[Bugfix] Fix numel() downcast in fused_layernorm_dynamic_per_token_quant.cu (#17316)
2025-04-28 19:23:18 -07:00
gguf
[BugFix][ROCm] Fix GGUF MoE Dispatch Block_Dim for ROCm (#16247)
2025-04-08 05:10:26 -07:00
gptq
…
gptq_allspark
pre-commit autoupdate (#17380)
2025-04-29 06:46:55 -07:00
gptq_marlin
pre-commit autoupdate (#17380)
2025-04-29 06:46:55 -07:00
machete
…
marlin
pre-commit autoupdate (#17380)
2025-04-29 06:46:55 -07:00
activation_kernels.cu
[torch.compile] Add torch inductor pass for fusing silu_and_mul with subsequent scaled_fp8_quant operations (#10867)
2025-05-01 07:59:28 -07:00
utils.cuh
…
vectorization.cuh
…