# KServe
vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.
You can use vLLM with KServe's [Hugging Face serving runtime](https://kserve.github.io/website/docs/model-serving/generative-inference/overview) or via the [`LLMInferenceService` resource, which uses llm-d](https://kserve.github.io/website/docs/model-serving/generative-inference/llmisvc/llmisvc-overview).
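As a rough illustration of the first option, the sketch below shows the general shape of a KServe `InferenceService` manifest using the Hugging Face serving runtime (which runs vLLM as its backend for generative models). The resource name, model name, and model ID here are placeholders, not values from this document; consult the KServe documentation linked above for the authoritative fields and current syntax.

```yaml
# Hypothetical example: an InferenceService using KServe's Hugging Face
# serving runtime. Names and model IDs are illustrative placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-llm          # placeholder resource name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface    # selects the Hugging Face serving runtime
      args:
        - --model_name=example-llm
        - --model_id=meta-llama/Meta-Llama-3-8B-Instruct  # placeholder model
```

Applying a manifest like this (e.g. with `kubectl apply -f`) asks KServe to provision a scalable serving deployment for the model; the exact arguments and resource requests depend on your cluster and the model being served.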