From f716a153723d4b2e18d01380cfe25d9ac636e2ef Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Mon, 24 Nov 2025 09:40:05 -0500
Subject: [PATCH] Update KServe guide link in documentation (#29258)

Signed-off-by: Yuan Tang
---
 docs/deployment/integrations/kserve.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/deployment/integrations/kserve.md b/docs/deployment/integrations/kserve.md
index edf79fca4f93e..37b29aa1a4876 100644
--- a/docs/deployment/integrations/kserve.md
+++ b/docs/deployment/integrations/kserve.md
@@ -2,4 +2,4 @@
 
 vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.
 
-Please see [this guide](https://kserve.github.io/website/latest/modelserving/v1beta1/llm/huggingface/) for more details on using vLLM with KServe.
+Please see [this guide](https://kserve.github.io/website/docs/model-serving/generative-inference/overview) for more details on using vLLM with KServe.