diff --git a/docs/source/index.rst b/docs/source/index.rst
index e6d0bc67c003..f2131cd88f41 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -64,6 +64,7 @@ Documentation
    serving/distributed_serving
    serving/run_on_sky
+   serving/deploying_with_triton
 
 .. toctree::
    :maxdepth: 1
diff --git a/docs/source/serving/deploying_with_triton.rst b/docs/source/serving/deploying_with_triton.rst
new file mode 100644
index 000000000000..5ce7c3d03dd2
--- /dev/null
+++ b/docs/source/serving/deploying_with_triton.rst
@@ -0,0 +1,6 @@
+.. _deploying_with_triton:
+
+Deploying with NVIDIA Triton
+============================
+
+The `Triton Inference Server <https://github.com/triton-inference-server>`_ hosts a tutorial demonstrating how to quickly deploy a simple `facebook/opt-125m <https://huggingface.co/facebook/opt-125m>`_ model using vLLM. Please see `Deploying a vLLM model in Triton <https://github.com/triton-inference-server/tutorials/blob/main/Quick_Deploy/vLLM/README.md>`_ for more details.