llama_index serving integration documentation (#6973)

Co-authored-by: pavanmantha <pavan.mantha@thevaslabs.io>

parent f55a9aea45
commit 22b39e11f2
@@ -12,3 +12,4 @@ Integrations
    deploying_with_lws
    deploying_with_dstack
    serving_with_langchain
+   serving_with_llamaindex
docs/source/serving/serving_with_llamaindex.rst (new file, 27 lines)

@@ -0,0 +1,27 @@
.. _run_on_llamaindex:

Serving with llama_index
============================

vLLM is also available via `llama_index <https://github.com/run-llama/llama_index>`_.

To install llama_index, run

.. code-block:: console

    $ pip install llama-index-llms-vllm -q

To run inference on a single GPU or multiple GPUs, use the ``Vllm`` class from ``llama_index``.

.. code-block:: python

    from llama_index.llms.vllm import Vllm

    llm = Vllm(
        model="microsoft/Orca-2-7b",
        tensor_parallel_size=4,   # shard the model across 4 GPUs
        max_new_tokens=100,
        # extra keyword arguments forwarded to the underlying vLLM engine
        vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
    )
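
Once constructed, ``llm`` behaves like any other LlamaIndex LLM. A minimal usage sketch, assuming the model above is available locally or downloadable (the prompt is an illustrative placeholder):

.. code-block:: python

    # ``complete`` is LlamaIndex's standard synchronous completion method;
    # the generated text is available on the response's ``text`` attribute.
    response = llm.complete("Briefly explain what vLLM is.")
    print(response.text)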

Please refer to this `Tutorial <https://docs.llamaindex.ai/en/latest/examples/llm/vllm/>`_ for more details.