Merge 27067329f95318c2e176cec8b69500da85374fb6 into 9b4e9788e4a3a731f7567338ed15d3ec549ce03b

This commit is contained in:
neolithic5452 2025-08-28 14:35:05 +08:00 committed by GitHub
commit c5aa6af85b


@ -322,7 +322,7 @@ For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy
### 6.4 Inference with TRT-LLM (recommended)
[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/deepseek_v3.
[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRT-LLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: [https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3).
### 6.5 Inference with vLLM (recommended)