xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-04-04 17:07:05 +08:00
vllm / csrc / quantization
Latest commit: 7c25fe45a6 by kliuae, "[AMD] Add support for GGUF quantization on ROCm (#10254)", 2024-11-22 21:14:49 -08:00
aqlm                [Kernel] fix types used in aqlm and ggml kernels to support dynamo (#7596)    2024-08-16 14:00:11 -07:00
awq                 …
compressed_tensors  [BugFix] [Kernel] Fix GPU SEGV occurring in int8 kernels (#9391)              2024-10-17 01:34:06 +00:00
cutlass_w8a8        [Kernel] Initial Machete W4A8 support + Refactors (#9855)                     2024-11-18 12:59:29 -07:00
fp8                 [torch.compile] Fuse RMSNorm with quant (#9138)                               2024-11-08 21:20:08 +00:00
gguf                [AMD] Add support for GGUF quantization on ROCm (#10254)                      2024-11-22 21:14:49 -08:00
gptq                …
gptq_marlin         [Model][Quantization] HQQ support through Marlin kernel expansion (#9766)     2024-11-19 13:31:12 -08:00
machete             [Kernel] Initial Machete W4A8 support + Refactors (#9855)                     2024-11-18 12:59:29 -07:00
marlin              [Bugfix] Marlin 2:4 temp fix for large M dim (>256) (#10464)                  2024-11-19 19:40:33 -08:00