Quantization
Quantization trades off model precision for a smaller memory footprint, allowing large models to run on a wider range of devices.
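To make the trade-off concrete, here is a minimal, self-contained sketch of symmetric int8 quantization. This is illustrative only; the helper names below are hypothetical and this is not vLLM's implementation (the methods listed in this section, such as AWQ or GPTQ, use more sophisticated schemes).

```python
# Illustrative sketch: symmetric int8 quantization of a weight vector.
# Helper names are hypothetical; this is not vLLM's internal code.

def quantize_int8(values):
    """Map floats to int8 using a single per-tensor scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in q]

weights = [0.02, -1.5, 0.73, 3.14, -0.001]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each element now needs 1 byte instead of 4 (fp32): a 4x memory reduction,
# at the cost of a per-element rounding error of at most scale / 2.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, scale, max_err)
```

The rounding error is bounded by half the quantization step (`scale / 2`), which is the precision lost in exchange for the smaller footprint.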
Contents: