# Kthena

[**Kthena**](https://github.com/volcano-sh/kthena) is a Kubernetes-native LLM inference platform that transforms how organizations deploy and manage Large Language Models in production. Built with declarative model lifecycle management and intelligent request routing, it provides high performance and enterprise-grade scalability for LLM inference workloads.

This guide shows how to deploy a production-grade, **multi-node vLLM** service on Kubernetes. We'll:

- Install the required components (Kthena + Volcano).
- Deploy a multi-node vLLM model via Kthena's `ModelServing` CR.
- Validate the deployment.

---

## 1. Prerequisites

You need:

- A Kubernetes cluster with **GPU nodes**.
- `kubectl` access with cluster-admin or equivalent permissions.
- **Volcano** installed for gang scheduling.
- **Kthena** installed with the `ModelServing` CRD available.
- A valid **Hugging Face token** if loading models from Hugging Face Hub.

### 1.1 Install Volcano

```bash
helm repo add volcano-sh https://volcano-sh.github.io/helm-charts
helm repo update
helm install volcano volcano-sh/volcano -n volcano-system --create-namespace
```

This provides the gang-scheduling and network topology features used by Kthena.

### 1.2 Install Kthena

```bash
helm install kthena oci://ghcr.io/volcano-sh/charts/kthena --version v0.1.0 --namespace kthena-system --create-namespace
```

After the install:

- The `kthena-system` namespace is created.
- Kthena controllers and CRDs, including `ModelServing`, are installed and healthy.

Validate:

```bash
kubectl get crd | grep modelserving
```

You should see:

```text
modelservings.workload.serving.volcano.sh   ...
```

---

## 2. The Multi-Node vLLM `ModelServing` Example

Kthena provides an example manifest that deploys a **multi-node vLLM cluster running Llama**. Conceptually it is equivalent to the vLLM production-stack Helm deployment, but expressed as a `ModelServing` resource.

A simplified version of the example (`llama-multinode`) looks like:

- `spec.replicas: 1` – one `ServingGroup` (one logical model deployment).
- `roles`:
  - `entryTemplate` – defines **leader** pods that run:
    - vLLM's **multi-node cluster bootstrap script** (Ray cluster).
    - the vLLM **OpenAI-compatible API server**.
  - `workerTemplate` – defines **worker** pods that join the leader's Ray cluster.

Key points from the example YAML:

- **Image**: `vllm/vllm-openai:latest` (matches upstream vLLM images).
- **Command** (leader):

  ```yaml
  command:
    - sh
    - -c
    - >
      bash /vllm-workspace/examples/online_serving/multi-node-serving.sh leader --ray_cluster_size=2;
      python3 -m vllm.entrypoints.openai.api_server
      --port 8080
      --model meta-llama/Llama-3.1-405B-Instruct
      --tensor-parallel-size 8
      --pipeline-parallel-size 2
  ```

- **Command** (worker):

  ```yaml
  command:
    - sh
    - -c
    - >
      bash /vllm-workspace/examples/online_serving/multi-node-serving.sh worker --ray_address=$(ENTRY_ADDRESS)
  ```

---

## 3. Deploying Multi-Node Llama vLLM via Kthena

### 3.1 Prepare the Manifest

The model is pulled from Hugging Face Hub, so the pods need your Hugging Face token. **Recommended**: provide it via a Secret instead of a raw env var:

```bash
kubectl create secret generic hf-token \
  -n default \
  --from-literal=HUGGING_FACE_HUB_TOKEN='<your-hf-token>'
```

If you create the `hf-token` Secret, reference it from the pod templates (e.g. via `secretKeyRef`) instead of hard-coding the token.

### 3.2 Apply the `ModelServing`

```bash
cat <<EOF | kubectl apply -f -
# ... paste the full llama-multinode ModelServing manifest here
# (the example summarized in Section 2) ...
EOF
```

## 4. Validate the Deployment

Check that the `ModelServing` and its pods are up:

```bash
kubectl get modelserving -n default
kubectl get pods -n default
```

Wait for all leader and worker pods to reach `Running`/`Ready`; the first start can take a while because the model weights are downloaded.

## 5. Pod Naming Convention

Each pod name encodes its place in the `ModelServing` topology. The first number indicates the `ServingGroup`. The second (`405b`) is the `Role`. The remaining indices identify the pod within the role.
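As a convenience, here is a small sketch for waiting on readiness and inspecting the leader. It assumes the pods carry the `modelserving.volcano.sh/name` label used by the Service selector in the next section, and the leader pod name is a placeholder to replace with the actual entry pod name from `kubectl get pods`:

```bash
# Wait until every pod in the ServingGroup is Ready. The first start can take a
# long time while the 405B weights download, so use a generous timeout.
# Assumes the pods are labeled modelserving.volcano.sh/name=llama-multinode.
kubectl wait pod -n default -l modelserving.volcano.sh/name=llama-multinode \
  --for=condition=Ready --timeout=60m

# Tail the entry (leader) pod to watch the Ray cluster bootstrap and the
# OpenAI-compatible server start on port 8080 (as configured in the leader
# command above). The pod name is a placeholder; take it from `kubectl get pods`.
kubectl logs -n default -f <llama-multinode-entry-pod>
```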
## 6. Accessing the vLLM OpenAI-Compatible API

Expose the entry via a Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: llama-multinode-openai
  namespace: default
spec:
  selector:
    modelserving.volcano.sh/name: llama-multinode
    modelserving.volcano.sh/entry: "true"
    # optionally further narrow to the leader role if you label it
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: ClusterIP
```

Port-forward from your local machine:

```bash
kubectl port-forward svc/llama-multinode-openai 30080:80 -n default
```

Then:

- List models:

  ```bash
  curl -s http://localhost:30080/v1/models
  ```

- Send a completion request (mirroring the vLLM production-stack docs):

  ```bash
  curl -X POST http://localhost:30080/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "meta-llama/Llama-3.1-405B-Instruct",
      "prompt": "Once upon a time,",
      "max_tokens": 10
    }'
  ```

You should see an OpenAI-style response from vLLM.

---

## 7. Clean Up

To remove the deployment and its resources:

```bash
kubectl delete modelserving llama-multinode -n default
kubectl delete svc llama-multinode-openai -n default   # the Service created in Section 6
```

If you're done with the entire stack:

```bash
helm uninstall kthena -n kthena-system   # or your Kthena release name
helm uninstall volcano -n volcano-system
```
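Optionally, a quick check (using the same names as above) to confirm the workload is gone after the `ModelServing` is deleted:

```bash
# All llama-multinode pods should disappear once the ModelServing is deleted.
kubectl get pods -n default | grep llama-multinode || echo "no llama-multinode pods remain"
```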