Mirror of https://git.datalinker.icu/deepseek-ai/DeepSeek-V3.git (synced 2026-01-23 10:34:25 +08:00)
Fix link to DeepSeek-V3 model in README
Updated the text inside the square brackets [ ] to accurately describe the destination. Ensured the URL inside the parentheses ( ) is the correct one.
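For context, the fix follows standard Markdown link syntax: the bracketed part is the visible label and the parenthesized part is the target URL, so the two should agree when the label itself is a URL. A minimal illustration (with a placeholder URL, not one from this commit):

```markdown
[label shown to the reader](https://example.com/actual/destination)
```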
This commit is contained in:
parent 27067329f9
commit 10adde6a8e
@@ -322,7 +322,7 @@ For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy
### 6.4 Inference with TRT-LLM (recommended)
-[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: [https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/deepseek_v3](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3).
+[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: [https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3).
### 6.5 Inference with vLLM (recommended)