From 7377dd0307a56a3a5cd0214a8b7226e9ebdc5ad6 Mon Sep 17 00:00:00 2001
From: Reid <61492567+reidliu41@users.noreply.github.com>
Date: Wed, 7 May 2025 20:29:05 +0800
Subject: [PATCH] [doc] update the issue link (#17782)

Signed-off-by: reidliu41
Co-authored-by: reidliu41
---
 docs/source/features/quantization/fp8.md  | 2 +-
 docs/source/features/quantization/int4.md | 2 +-
 docs/source/features/quantization/int8.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/features/quantization/fp8.md b/docs/source/features/quantization/fp8.md
index 21969bbc2b9f7..cb304d54726c8 100644
--- a/docs/source/features/quantization/fp8.md
+++ b/docs/source/features/quantization/fp8.md
@@ -117,7 +117,7 @@ Here's an example of the resulting scores:
 
 ## Troubleshooting and Support
 
-If you encounter any issues or have feature requests, please open an issue on the `vllm-project/llm-compressor` GitHub repository.
+If you encounter any issues or have feature requests, please open an issue on the [vllm-project/llm-compressor](https://github.com/vllm-project/llm-compressor/issues) GitHub repository.
 
 ## Online Dynamic Quantization
 
diff --git a/docs/source/features/quantization/int4.md b/docs/source/features/quantization/int4.md
index be48788a4ef60..7a0ab4ad229e6 100644
--- a/docs/source/features/quantization/int4.md
+++ b/docs/source/features/quantization/int4.md
@@ -169,4 +169,4 @@ recipe = GPTQModifier(
 
 ## Troubleshooting and Support
 
-If you encounter any issues or have feature requests, please open an issue on the [`vllm-project/llm-compressor`](https://github.com/vllm-project/llm-compressor) GitHub repository. The full INT4 quantization example in `llm-compressor` is available [here](https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_w4a16/llama3_example.py).
+If you encounter any issues or have feature requests, please open an issue on the [vllm-project/llm-compressor](https://github.com/vllm-project/llm-compressor/issues) GitHub repository. The full INT4 quantization example in `llm-compressor` is available [here](https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_w4a16/llama3_example.py).
diff --git a/docs/source/features/quantization/int8.md b/docs/source/features/quantization/int8.md
index d6ddca18e2686..1e4b01d35575c 100644
--- a/docs/source/features/quantization/int8.md
+++ b/docs/source/features/quantization/int8.md
@@ -138,4 +138,4 @@ Quantized models can be sensitive to the presence of the `bos` token. Make sure
 
 ## Troubleshooting and Support
 
-If you encounter any issues or have feature requests, please open an issue on the [`vllm-project/llm-compressor`](https://github.com/vllm-project/llm-compressor) GitHub repository.
+If you encounter any issues or have feature requests, please open an issue on the [vllm-project/llm-compressor](https://github.com/vllm-project/llm-compressor/issues) GitHub repository.