(quantization-index)=

# Quantization
Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.
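To make the trade-off concrete, here is a toy sketch (not vLLM's actual kernels) of symmetric int8 quantization of a float32 weight tensor: storing 8-bit integers plus one scale cuts memory roughly 4x, at the cost of a bounded rounding error.

```python
import numpy as np

# Toy illustration only: per-tensor symmetric int8 quantization.
weights = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)

# One scale for the whole tensor maps the largest magnitude to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Approximate reconstruction; per-element error is at most scale / 2.
dequant = q.astype(np.float32) * scale

print(weights.nbytes // q.nbytes)  # → 4 (int8 storage is 4x smaller than float32)
```

Real schemes (see the pages below) refine this idea with per-channel or per-group scales, zero points, and calibration data to keep accuracy loss small.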
:::{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
gptqmodel
int4
int8
fp8
quark
quantized_kvcache
:::