mirror of
https://git.datalinker.icu/vllm-project/vllm.git
synced 2025-12-15 07:45:01 +08:00
docs: Add BentoML deployment doc (#3336)
Signed-off-by: Sherlock113 <sherlockxu07@gmail.com>
This commit is contained in:
parent 654865e21d
commit b0925b3878
@@ -73,6 +73,7 @@ Documentation

    serving/run_on_sky
    serving/deploying_with_kserve
    serving/deploying_with_triton
+   serving/deploying_with_bentoml
    serving/deploying_with_docker
    serving/serving_with_langchain
    serving/metrics
docs/source/serving/deploying_with_bentoml.rst  (new file, 8 lines)

@@ -0,0 +1,8 @@
+.. _deploying_with_bentoml:
+
+Deploying with BentoML
+======================
+
+`BentoML <https://github.com/bentoml/BentoML>`_ allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.
+
+For details, see the tutorial `vLLM inference in the BentoML documentation <https://docs.bentoml.com/en/latest/use-cases/large-language-models/vllm.html>`_.
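The new page above says the BentoML-served vLLM backend exposes OpenAI-compatible endpoints. As a hedged illustration (not part of the commit), here is a minimal sketch of building a chat-completion request for such an endpoint; the base URL `http://localhost:3000`, the model name, and the `build_chat_request` helper are all illustrative assumptions, not values from the tutorial.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-compatible
    /v1/chat/completions request, the endpoint style described
    in the new BentoML deployment page."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

# Base URL and model name are illustrative assumptions.
url, body = build_chat_request("http://localhost:3000", "my-llm", "Hello!")
print(url)  # http://localhost:3000/v1/chat/completions
```

The body can then be POSTed with any HTTP client, e.g. `urllib.request.urlopen` with a `Content-Type: application/json` header, against whatever host and port your own deployment actually listens on.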