diff --git a/docs/cli/README.md b/docs/cli/README.md
index 5feb316d61a89..f43ce766390ad 100644
--- a/docs/cli/README.md
+++ b/docs/cli/README.md
@@ -12,19 +12,6 @@ Available Commands:
 vllm {chat,complete,serve,bench,collect-env,run-batch}
 ```
 
-## Table of Contents
-
-- [serve](#serve)
-- [chat](#chat)
-- [complete](#complete)
-- [bench](#bench)
-  - [latency](#latency)
-  - [serve](#serve-1)
-  - [throughput](#throughput)
-- [collect-env](#collect-env)
-- [run-batch](#run-batch)
-- [More Help](#more-help)
-
 ## serve
 
 Start the vLLM OpenAI Compatible API server.
diff --git a/docs/deployment/nginx.md b/docs/deployment/nginx.md
index 80242919ba5b3..f0ff5c1d0e76d 100644
--- a/docs/deployment/nginx.md
+++ b/docs/deployment/nginx.md
@@ -5,16 +5,6 @@ title: Using Nginx
 
 This document shows how to launch multiple vLLM serving containers and use Nginx to act as a load balancer between the servers.
 
-Table of contents:
-
-1. [Build Nginx Container][nginxloadbalancer-nginx-build]
-2. [Create Simple Nginx Config file][nginxloadbalancer-nginx-conf]
-3. [Build vLLM Container][nginxloadbalancer-nginx-vllm-container]
-4. [Create Docker Network][nginxloadbalancer-nginx-docker-network]
-5. [Launch vLLM Containers][nginxloadbalancer-nginx-launch-container]
-6. [Launch Nginx][nginxloadbalancer-nginx-launch-nginx]
-7. [Verify That vLLM Servers Are Ready][nginxloadbalancer-nginx-verify-nginx]
-
 [](){ #nginxloadbalancer-nginx-build }
 ## Build Nginx Container