mirror of https://git.datalinker.icu/vllm-project/vllm.git — synced 2025-12-10 07:34:57 +08:00
[Docs] Add Modal to deployment frameworks (#11907)

commit 36f5303578 (parent 9a228348d2)
```diff
@@ -2,6 +2,6 @@
 # BentoML
 
-[BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-complicant image and deploy it on Kubernetes.
+[BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.
 
 For details, see the tutorial [vLLM inference in the BentoML documentation](https://docs.bentoml.com/en/latest/use-cases/large-language-models/vllm.html).
```
````diff
@@ -8,6 +8,7 @@ cerebrium
 dstack
 helm
 lws
+modal
 skypilot
 triton
 ```
````
docs/source/deployment/frameworks/modal.md (new file, 7 lines)
```diff
@@ -0,0 +1,7 @@
+(deployment-modal)=
+
+# Modal
+
+vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.
+
+For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).
```
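Both frameworks touched by this commit (BentoML and Modal) front vLLM with an OpenAI-compatible HTTP API, so a client talks to either deployment the same way. Below is a minimal sketch of building and sending a `/v1/chat/completions` request using only the Python standard library; the base URL and model name are placeholders, not values from this commit, and the actual URL is whatever your deployment prints when it starts serving.

```python
import json
from urllib import request

# Placeholder: substitute the URL your BentoML or Modal deployment exposes.
BASE_URL = "https://my-vllm-deployment.example.com/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }


def send(payload: dict) -> bytes:
    """POST the payload to the chat-completions endpoint and return raw bytes."""
    req = request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    # Model name is illustrative; use whatever model your server loaded.
    payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
    print(json.dumps(payload, indent=2))
    # send(payload)  # uncomment once a live endpoint is available
```

Because the request shape is standard, the official `openai` client works equally well against such a deployment by pointing its `base_url` at the server.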