xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-21 23:35:46 +08:00
vllm / tests / quantization
Latest commit: 7fc23be81c by Mor Zusman, [Kernel] W8A16 Int8 inside FusedMoE (#7415), 2024-08-16 10:06:51 -07:00
__init__.py                 …                                                                  
test_bitsandbytes.py        [bitsandbytes]: support read bnb pre-quantized model (#5753)        2024-07-23 23:45:09 +00:00
test_compressed_tensors.py  [Misc] Revert compressed-tensors code reuse (#7521)                 2024-08-14 15:07:37 -07:00
test_configs.py             [Kernel][Core] Add AWQ support to the Marlin kernel (#6612)         2024-07-21 19:41:42 -04:00
test_cpu_offload.py         [CI] Move quantization cpu offload tests out of fastcheck (#7574)   2024-08-15 21:16:20 -07:00
test_experts_int8.py        [Kernel] W8A16 Int8 inside FusedMoE (#7415)                         2024-08-16 10:06:51 -07:00
test_fp8.py                 [Misc/Testing] Use torch.testing.assert_close (#7324)               2024-08-16 04:24:04 +00:00
test_lm_head.py             [Core] Support loading GGUF model (#5191)                           2024-08-05 17:54:23 -06:00
utils.py                    [hardware][misc] introduce platform abstraction (#6080)             2024-07-02 20:12:22 -07:00