Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-03-30 03:37:28 +08:00)
vllm/csrc/quantization
Latest commit: ae36150ec2 "test" by yewentao256 <zhyanwentao@126.com>, 2025-10-03 13:35:52 -07:00
Name | Last commit | Last updated
awq | …
cutlass_w4a8 | [Kernel] Faster pre-processing time for W4A8 (#23972) | 2025-09-17 14:35:32 -07:00
fp4 | [Bugfix] Fix accuracy issue for silu_mul + nvfp4 quant fusion kernel (#24833) | 2025-09-17 16:37:23 -07:00
fused_kernels | test | 2025-10-03 13:35:52 -07:00
gguf | [Bugfix][ROCm] Fix for warp_size uses on host (#21205) | 2025-07-24 00:37:19 -07:00
gptq | [MISC] Remove unused variableds in C++ (#19609) | 2025-06-15 20:05:28 -07:00
gptq_allspark | …
gptq_marlin | [Kernel] [Quantization] Add MXFP4 and bias support for marlin kernel (#22428) | 2025-08-14 11:23:22 -07:00
hadamard/hadacore | [Transform] Deterministic Hadacore Transforms (#24106) | 2025-09-15 12:59:31 -06:00
machete | [Doc]: fix typos in Python comments (#24294) | 2025-09-05 19:41:12 -07:00
marlin/sparse | [Kernel/Quant] Remove the original marlin format and qqq (#23204) | 2025-08-20 15:13:36 -04:00
w8a8 | test | 2025-10-03 13:35:52 -07:00
activation_kernels.cu | test | 2025-10-03 13:35:52 -07:00
utils.cuh | …
vectorization_utils.cuh | Make sure that vectorize_with_alignment produced vectorized global loads (#23182) | 2025-08-21 20:06:54 +00:00
vectorization.cuh | [Perf] Tune scaled_fp8_quant by increasing vectorization (#18844) | 2025-06-03 13:48:25 -07:00