Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-10 07:15:01 +08:00)
vllm/csrc/quantization

Latest commit: 4f93dfe952 by Luka Govedič, 2024-11-08 21:20:08 +00:00
[torch.compile] Fuse RMSNorm with quant (#9138)
Signed-off-by: luka <luka@neuralmagic.com>
Co-authored-by: youkaichao <youkaichao@126.com>
Directory          | Last commit                                                                          | Date
aqlm               | [Kernel] fix types used in aqlm and ggml kernels to support dynamo (#7596)           | 2024-08-16 14:00:11 -07:00
awq                | [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001)     | 2024-08-05 16:00:01 -04:00
compressed_tensors | [BugFix] [Kernel] Fix GPU SEGV occurring in int8 kernels (#9391)                     | 2024-10-17 01:34:06 +00:00
cutlass_w8a8       | [Bugfix] Fix spurious "No compiled cutlass_scaled_mm ..." for W8A8 on Turing (#9487) | 2024-10-22 15:41:13 -07:00
fp8                | [torch.compile] Fuse RMSNorm with quant (#9138)                                      | 2024-11-08 21:20:08 +00:00
gguf               | [Bugfix][Kernel] Fix build for sm_60 in GGUF kernel (#8506)                          | 2024-09-16 12:15:57 -06:00
gptq               | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047)   | 2024-06-09 16:23:30 -04:00
gptq_marlin        | [Bugfix] Fix support for dimension like integers and ScalarType (#9299)              | 2024-10-17 19:08:34 +00:00
machete            | [CI/Build] drop support for Python 3.8 EOL (#8464)                                   | 2024-11-06 07:11:55 +00:00
marlin             | [Bugfix] Fix support for dimension like integers and ScalarType (#9299)              | 2024-10-17 19:08:34 +00:00