[Doc] Add headings to improve gptqmodel.md (#17164)
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

GPTQModel is one of the few quantization toolkits in the world that allows `Dynamic` per-module quantization, where different layers and/or modules within a model can be further optimized with custom quantization parameters. `Dynamic` quantization is fully integrated into vLLM and backed by support from the ModelCloud.AI team. Please refer to the [GPTQModel readme](https://github.com/ModelCloud/GPTQModel?tab=readme-ov-file#dynamic-quantization-per-module-quantizeconfig-override) for more details on this and other advanced features.
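
As a sketch of what a per-module override can look like: the `dynamic` field of `QuantizeConfig` maps regex patterns over module names to override dicts, following the format described in the GPTQModel readme (the layer indices and module patterns below are illustrative assumptions, not taken from this doc):

```python
from gptqmodel import QuantizeConfig

# Regex keys select modules by name. A `+:` prefix (or no prefix) applies the
# override to matching modules; a `-:` prefix skips quantizing them entirely.
dynamic = {
    # quantize the gate modules of layer 18 with a finer group size
    r"+:.*\.18\..*gate.*": {"bits": 4, "group_size": 32},
    # leave the gate modules of layer 19 unquantized
    r"-:.*\.19\..*gate.*": {},
}

quant_config = QuantizeConfig(bits=4, group_size=128, dynamic=dynamic)
```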
## Installation

You can quantize your own models by installing [GPTQModel](https://github.com/ModelCloud/GPTQModel) or picking one of the [5000+ models on Huggingface](https://huggingface.co/models?sort=trending&search=gptq).
```console
pip install -U gptqmodel --no-build-isolation -v
```

## Quantizing a model

After installing GPTQModel, you are ready to quantize a model. Please refer to the [GPTQModel readme](https://github.com/ModelCloud/GPTQModel/?tab=readme-ov-file#quantization) for further details.
Here is an example of how to quantize `meta-llama/Llama-3.2-1B-Instruct` (a minimal sketch following the GPTQModel readme's basic flow; the calibration data and `QuantizeConfig` values are illustrative):

```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

model_id = "meta-llama/Llama-3.2-1B-Instruct"
quant_path = "Llama-3.2-1B-Instruct-gptqmodel-4bit"

# Roughly 1k short calibration samples; any representative text corpus works.
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load(model_id, quant_config)

# Increase `batch_size` to match GPU/VRAM specs to speed up quantization.
model.quantize(calibration_dataset, batch_size=2)

model.save(quant_path)
```

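Before serving the result, the saved checkpoint can be loaded back for a quick sanity check (a sketch based on the GPTQModel readme's basic usage; `quant_path` comes from the snippet above):

```python
from gptqmodel import GPTQModel

model = GPTQModel.load(quant_path)
tokens = model.generate("Uncovering deep insights begins with")[0]
print(model.tokenizer.decode(tokens))
```
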
## Running a quantized model with vLLM

To run a GPTQModel-quantized model with vLLM, you can use [DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2](https://huggingface.co/ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2) with the following command:

```console
python examples/offline_inference/llm_engine_example.py --model DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2
```
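
The same checkpoint can also be exposed as an OpenAI-compatible endpoint with `vllm serve` (a sketch with default flags, using the full Hugging Face path from the link above):

```console
vllm serve ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2
```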

## Using GPTQModel with vLLM's Python API

GPTQModel-quantized models are also supported directly through the LLM entrypoint:

```python
from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The capital of France is",
    "The future of AI is",
]

# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.6, top_p=0.9)

# Create an LLM.
llm = LLM(model="DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2")

# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```