llm-d

vLLM can be deployed with llm-d, a Kubernetes-native distributed inference serving stack that provides well-lit paths for serving large generative AI models at scale. It helps achieve the fastest time to state-of-the-art (SOTA) performance for key open-source (OSS) models across most hardware accelerators and infrastructure providers.

You can use vLLM with llm-d either directly, by following this guide, or through KServe's LLMInferenceService resource.
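
However vLLM is deployed through llm-d, the served models are reachable through vLLM's OpenAI-compatible API. The sketch below shows one way to query such a deployment from Python; the gateway URL, API key, and model name are placeholders and depend entirely on your own cluster and configuration.

```python
# Minimal sketch: querying a vLLM model served via llm-d (or KServe's
# LLMInferenceService) through its OpenAI-compatible endpoint.
# The base_url, api_key, and model name below are hypothetical placeholders --
# substitute the values exposed by your own deployment's inference gateway.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm-d-gateway.example.local/v1",  # hypothetical gateway address
    api_key="EMPTY",  # vLLM accepts any key unless one is explicitly configured
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whichever model your deployment serves
    messages=[{"role": "user", "content": "Hello from llm-d!"}],
)
print(response.choices[0].message.content)
```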