xinyun/vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-10 08:04:58 +08:00
vllm/csrc/quantization
Latest commit: cdd7025961 [kernel] Improve FP8 PTPC on Hopper for larger shapes (#28692)
Signed-off-by: czhu-cohere <conway.zhu@cohere.com>
2025-11-14 09:59:11 -08:00
awq  …
cutlass_w4a8  …
fp4  [Bugfix] Use latency MOE backend as default for Flashinfer and other misc fixes (#27439)  2025-11-07 04:18:39 -08:00
fused_kernels  [torch.compile] Enable attention and allreduce fusion without custom ops enabled (#24604)  2025-10-17 08:10:23 -06:00
gguf  …
gptq  [Kernel] Add GPTQv2 format support for low-bit or asymmetric quantization, by adapting gptq_gemm (#26092)  2025-10-23 23:26:13 -04:00
gptq_allspark  …
gptq_marlin  Rewrite C++ meta funcs to Python (#28595)  2025-11-14 00:52:50 +08:00
hadamard/hadacore  …
machete  Update Optional[x] -> x | None and Union[x, y] to x | y (#26633)  2025-10-12 09:51:31 -07:00
marlin/sparse  …
w8a8  [kernel] Improve FP8 PTPC on Hopper for larger shapes (#28692)  2025-11-14 09:59:11 -08:00
activation_kernels.cu  [Performance][B200] silu_mul_quant: pack scales in int32 (#28358)  2025-11-13 10:16:55 -08:00
utils.cuh  …
vectorization_utils.cuh  …
vectorization.cuh  …