From 27067329f95318c2e176cec8b69500da85374fb6 Mon Sep 17 00:00:00 2001
From: neolithic5452 <106451361+fernandaspets@users.noreply.github.com>
Date: Thu, 21 Aug 2025 18:10:39 -0700
Subject: [PATCH] Fix broken TensorRT-LLM link to deepseekv3

Fix broken TensorRT-LLM link to deepseekv3
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e94a77f..8bc28d3 100644
--- a/README.md
+++ b/README.md
@@ -322,7 +322,7 @@ For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy
 
 ### 6.4 Inference with TRT-LLM (recommended)
 
-[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/deepseek_v3.
+[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: [https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/deepseek_v3).
 
 ### 6.5 Inference with vLLM (recommended)
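
For context, this kind of breakage (the upstream TensorRT-LLM repo moving `examples/deepseek_v3` to `examples/models/core/deepseek_v3`) is easy to catch automatically before it reaches readers. Below is a minimal sketch of a README link checker, not part of this patch: it assumes Python with the `requests` package installed, and the script name and URL regex are illustrative only.

```python
"""Illustrative README link checker (not part of this patch).

Flags any http(s) URL in a file that no longer resolves, which is how a
moved path like examples/deepseek_v3 -> examples/models/core/deepseek_v3
would surface in CI.
"""
import re
import sys

import requests  # assumed third-party dependency

# Match bare and markdown-wrapped http(s) URLs; stop at whitespace and
# markdown/bracket delimiters.
URL_RE = re.compile(r"https?://[^\s<>()\[\]]+")


def check_links(path: str = "README.md") -> int:
    """Return the number of URLs in `path` that fail to resolve."""
    text = open(path, encoding="utf-8").read()
    broken = 0
    for url in sorted(set(URL_RE.findall(text))):
        url = url.rstrip(".,;:")  # trailing prose punctuation is not part of the URL
        try:
            # Follow redirects; GitHub answers 404 for tree paths that moved.
            ok = requests.get(url, allow_redirects=True, timeout=10).status_code < 400
        except requests.RequestException:
            ok = False
        if not ok:
            broken += 1
            print(f"BROKEN: {url}")
    return broken


if __name__ == "__main__":
    # Optional argument: path to the file to scan (defaults to README.md).
    sys.exit(1 if check_links(*sys.argv[1:2]) else 0)
```

Run against the repository root (e.g. `python check_readme_links.py README.md`, a hypothetical file name); a non-zero exit code makes it suitable as a CI gate so a renamed upstream path fails the build instead of shipping as a dead link.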