xinyun / vllm (mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-16 21:07:32 +08:00)
vllm / tests / quantization
Latest commit: 8cef6e02dc, [Misc] add w8a8 asym models (#11075), Dipika Sikka, 2024-12-23 13:33:20 -05:00
File                         Last commit message                                                                                Date
__init__.py                  …
test_bitsandbytes.py         [Bugfix] bitsandbytes models fail to run pipeline parallel (#10200)                                2024-11-13 09:56:39 -07:00
test_compressed_tensors.py   [Misc] add w8a8 asym models (#11075)                                                               2024-12-23 13:33:20 -05:00
test_configs.py              [Model] Add user-configurable task for models that support both generation and embedding (#9424)   2024-10-18 11:31:58 -07:00
test_cpu_offload.py          …
test_experts_int8.py         …
test_fp8.py                  [CI/Build] Avoid CUDA initialization (#8534)                                                       2024-09-18 10:38:11 +00:00
test_ipex_quant.py           [Hardware][XPU] AWQ/GPTQ support for xpu backend (#10107)                                          2024-11-18 11:18:05 -07:00
test_lm_head.py              …
utils.py                     [torch.compile] limit inductor threads and lazy import quant (#10482)                              2024-11-20 18:36:33 -08:00