vllm / csrc / quantization / cutlass_w8a8
Latest commit: 5467ac3196 by bnellnm: [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047), 2024-06-09 16:23:30 -04:00
broadcast_load_epilogue_c2x.hpp    [Kernel] Refactor CUTLASS kernels to always take scales that reside on the GPU (#5137)    2024-06-01 06:45:32 +00:00
broadcast_load_epilogue_c3x.hpp    [Kernel] Refactor CUTLASS kernels to always take scales that reside on the GPU (#5137)    2024-06-01 06:45:32 +00:00
common.hpp                         [Kernel] Add w8a8 CUTLASS kernels (#4749)                                                  2024-05-16 18:32:50 -04:00
scaled_mm_dq_c2x.cu                [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047)         2024-06-09 16:23:30 -04:00
scaled_mm_dq_c3x.cu                [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047)         2024-06-09 16:23:30 -04:00
scaled_mm_dq_entry.cu              [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047)         2024-06-09 16:23:30 -04:00
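
For context on the commit messages above: below is a minimal sketch of the TORCH_LIBRARY / TORCH_LIBRARY_IMPL registration pattern (as opposed to a PYBIND11_MODULE binding) referenced in #5047, combined with passing quantization scales as GPU-resident tensors as in #5137. The op name, schema, and function body here are hypothetical illustrations, not vLLM's actual registration code.

```cpp
// Illustrative sketch only; op name and body are hypothetical.
#include <torch/library.h>
#include <torch/torch.h>

// Hypothetical reference implementation: dequantize int8 inputs with
// scale tensors (kept on the same device as the data) and multiply in fp32.
torch::Tensor scaled_mm_dq(const torch::Tensor& a, const torch::Tensor& b,
                           const torch::Tensor& a_scales,
                           const torch::Tensor& b_scales) {
  auto a_fp = a.to(torch::kFloat32) * a_scales;
  auto b_fp = b.to(torch::kFloat32) * b_scales;
  return torch::matmul(a_fp, b_fp);
}

// TORCH_LIBRARY declares the op schema; TORCH_LIBRARY_IMPL binds an
// implementation to a dispatch key (CUDA here). No pybind11 module is needed.
TORCH_LIBRARY(example_ops, m) {
  m.def(
      "scaled_mm_dq(Tensor a, Tensor b, Tensor a_scales, Tensor b_scales)"
      " -> Tensor");
}

TORCH_LIBRARY_IMPL(example_ops, CUDA, m) {
  m.impl("scaled_mm_dq", &scaled_mm_dq);
}
```

With a registration of this shape, the op is reachable from Python as torch.ops.example_ops.scaled_mm_dq(a, b, a_scales, b_scales) without building a separate pybind11 extension module.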