llama_index serving integration documentation (#6973)

Co-authored-by: pavanmantha <pavan.mantha@thevaslabs.io>
Kameshwara Pavan Kumar Mantha 2024-08-15 04:08:37 +05:30 committed by GitHub
parent f55a9aea45
commit 22b39e11f2
2 changed files with 28 additions and 0 deletions

@@ -12,3 +12,4 @@ Integrations
 deploying_with_lws
 deploying_with_dstack
 serving_with_langchain
+serving_with_llamaindex

@@ -0,0 +1,27 @@
.. _run_on_llamaindex:

Serving with llama_index
============================

vLLM is also available via `llama_index <https://github.com/run-llama/llama_index>`_.

To install llama_index, run:

.. code-block:: console

    $ pip install llama-index-llms-vllm -q
To run inference on one or more GPUs, use the ``Vllm`` class from ``llama_index``.

.. code-block:: python

    from llama_index.llms.vllm import Vllm

    # tensor_parallel_size=4 shards the model across four GPUs; vllm_kwargs
    # are passed through to the underlying vLLM engine.
    llm = Vllm(
        model="microsoft/Orca-2-7b",
        tensor_parallel_size=4,
        max_new_tokens=100,
        vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
    )
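
Once constructed, the ``Vllm`` object exposes the standard llama_index LLM interface. As a minimal usage sketch (the prompt below is illustrative and not part of the original example), a single synchronous generation looks roughly like this:

.. code-block:: python

    # ``llm`` is the Vllm instance created above. complete() runs one
    # synchronous generation and returns a CompletionResponse whose
    # ``text`` attribute holds the generated output.
    response = llm.complete("What is vLLM? Answer in one sentence.")
    print(response.text)
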
Please refer to this `Tutorial <https://docs.llamaindex.ai/en/latest/examples/llm/vllm/>`_ for more details.