# KServe
vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.
Please see [this guide](https://kserve.github.io/website/docs/model-serving/generative-inference/overview) for more details on using vLLM with KServe.
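
As a concrete starting point, below is a minimal sketch of a KServe `InferenceService` manifest that serves a model through KServe's Hugging Face serving runtime, which uses vLLM as its backend for supported models. The service name, model ID, and resource requests are illustrative placeholders; consult the guide above for the full set of options your cluster and model require.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: huggingface-llama3   # illustrative name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface    # KServe's Hugging Face runtime, backed by vLLM
      args:
        # Placeholder model; substitute the model you want to serve.
        - --model_name=llama3
        - --model_id=meta-llama/Meta-Llama-3-8B-Instruct
      resources:
        limits:
          cpu: "6"
          memory: 24Gi
          nvidia.com/gpu: "1"
        requests:
          cpu: "6"
          memory: 24Gi
          nvidia.com/gpu: "1"
```

Apply the manifest with `kubectl apply -f <file>.yaml`; once the `InferenceService` reports ready, KServe routes inference requests to the vLLM-backed predictor.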