diff --git a/docs/serving/expert_parallel_deployment.md b/docs/serving/expert_parallel_deployment.md
index f8701870864dc..494d2ad021e71 100644
--- a/docs/serving/expert_parallel_deployment.md
+++ b/docs/serving/expert_parallel_deployment.md
@@ -158,10 +158,10 @@ vllm serve Qwen/Qwen3-30B-A3B \
 
 ### Memory Footprint Overhead
 
-EPLB uses redundant experts to that need to fit in GPU memory. This means that EPLB may not be a good fit for memory constrained environments or when KV cache space is at a premium.
+EPLB uses redundant experts that need to fit in GPU memory. This means that EPLB may not be a good fit for memory-constrained environments or when KV cache space is at a premium.
 
 This overhead equals `NUM_MOE_LAYERS * BYTES_PER_EXPERT * (NUM_TOTAL_EXPERTS + NUM_REDUNDANT_EXPERTS) ÷ NUM_EP_RANKS`.
 
-For DeepSeekV3, this is approximately `2.4 GB` for one redundant expert per rank.
+For DeepSeekV3, this is approximately `2.4 GB` for one redundant expert per EP rank.
 
 ### Example Command
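As a sanity check on the patched doc, the `2.4 GB` figure can be reproduced from the overhead formula. This is a back-of-the-envelope sketch: the model shapes below (58 MoE layers, hidden size 7168, MoE intermediate size 2048, FP8 weights, three projection matrices per expert) are assumptions taken from the published DeepSeek-V3 config, not from this diff.

```python
# Assumed DeepSeek-V3 shapes (from the public model config, hypothetical here):
#   61 layers total, first 3 dense -> 58 MoE layers
#   hidden size 7168, MoE intermediate size 2048, FP8 (1-byte) weights
#   each expert holds 3 projection matrices (gate, up, down)
NUM_MOE_LAYERS = 58
HIDDEN, INTERMEDIATE = 7168, 2048
BYTES_PER_PARAM = 1  # FP8
BYTES_PER_EXPERT = 3 * HIDDEN * INTERMEDIATE * BYTES_PER_PARAM

# With one redundant expert per EP rank, NUM_REDUNDANT_EXPERTS == NUM_EP_RANKS,
# so the per-rank term (NUM_REDUNDANT_EXPERTS / NUM_EP_RANKS) is 1 and the
# extra memory per rank reduces to NUM_MOE_LAYERS * BYTES_PER_EXPERT.
overhead_bytes = NUM_MOE_LAYERS * BYTES_PER_EXPERT
print(f"{overhead_bytes / 2**30:.2f} GiB")
```

Under these assumptions the result is roughly 2.4 GiB per rank, consistent with the figure quoted in the doc.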