vllm/docs/usage/README.md

# Using vLLM
vLLM supports the following usage patterns:
- [Inference and Serving](../serving/offline_inference.md): Run a single instance of a model (see the minimal sketch after this list).
- [Deployment](../deployment/docker.md): Scale up model instances for production.
- [Training](../training/rlhf.md): Train or fine-tune a model.
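As a quick taste of the first pattern, here is a minimal offline-inference sketch. The model name `facebook/opt-125m` and the sampling settings are illustrative choices, not requirements; see the linked guide for the full range of options.

```python
from vllm import LLM, SamplingParams

# Load a model into a single vLLM instance.
# "facebook/opt-125m" is just an example; any supported Hugging Face model works.
llm = LLM(model="facebook/opt-125m")

# Sampling settings shown here are arbitrary defaults for illustration.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts.
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```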