# LlamaIndex

vLLM is also available via [LlamaIndex](https://github.com/run-llama/llama_index).

To install LlamaIndex, run:

```bash
pip install llama-index-llms-vllm -q
```

To run inference on a single GPU or multiple GPUs, use the `Vllm` class from LlamaIndex:

```python
from llama_index.llms.vllm import Vllm

llm = Vllm(
    model="microsoft/Orca-2-7b",
    tensor_parallel_size=4,  # shard the model across 4 GPUs
    max_new_tokens=100,
    # extra keyword arguments are forwarded to the underlying vLLM engine
    vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
)
```
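
Once constructed, the object can be used like any other LlamaIndex LLM. A minimal sketch of generating a completion (the model name, prompt, and parameter values here are illustrative assumptions, not recommendations):

```python
from llama_index.llms.vllm import Vllm

# Single-GPU setup for illustration; adjust model and limits as needed.
llm = Vllm(model="microsoft/Orca-2-7b", max_new_tokens=64)

# `complete` is part of LlamaIndex's standard LLM interface; it returns a
# CompletionResponse whose string form is the generated text.
response = llm.complete("What is the capital of France?")
print(response)
```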

Please refer to this [Tutorial](https://docs.llamaindex.ai/en/latest/examples/llm/vllm/) for more details.