xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-22 21:54:41 +08:00
vllm / tests / quantization
Latest commit: 388ee3de66 by youkaichao, [torch.compile] limit inductor threads and lazy import quant (#10482), 2024-11-20 18:36:33 -08:00. Signed-off-by: youkaichao <youkaichao@gmail.com>
File | Last commit | Date
__init__.py | … |
test_bitsandbytes.py | [Bugfix] bitsandbytes models fail to run pipeline parallel (#10200) | 2024-11-13 09:56:39 -07:00
test_compressed_tensors.py | [bugfix] Fix static asymmetric quantization case (#10334) | 2024-11-15 09:35:11 +08:00
test_configs.py | [Model] Add user-configurable task for models that support both generation and embedding (#9424) | 2024-10-18 11:31:58 -07:00
test_cpu_offload.py | [ci][test] adjust max wait time for cpu offloading test (#7709) | 2024-08-20 17:12:44 -07:00
test_experts_int8.py | [Kernel] W8A16 Int8 inside FusedMoE (#7415) | 2024-08-16 10:06:51 -07:00
test_fp8.py | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00
test_ipex_quant.py | [Hardware][XPU] AWQ/GPTQ support for xpu backend (#10107) | 2024-11-18 11:18:05 -07:00
test_lm_head.py | … |
utils.py | [torch.compile] limit inductor threads and lazy import quant (#10482) | 2024-11-20 18:36:33 -08:00