From 30498f2a6592263a0e7e42080f8790ea72d9b122 Mon Sep 17 00:00:00 2001
From: Rakesh Asapanna <45640029+rozeappletree@users.noreply.github.com>
Date: Sat, 13 Sep 2025 12:45:41 +0530
Subject: [PATCH] [Doc]: Remove 404 hyperlinks (#24785)

Signed-off-by: Rakesh Asapanna <45640029+rozeappletree@users.noreply.github.com>
---
 docs/examples/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/examples/README.md b/docs/examples/README.md
index 3cf93027f4209..94f5efc92f386 100644
--- a/docs/examples/README.md
+++ b/docs/examples/README.md
@@ -2,6 +2,6 @@
 
 vLLM's examples are split into three categories:
 
-- If you are using vLLM from within Python code, see [Offline Inference](./offline_inference)
-- If you are using vLLM from an HTTP application or client, see [Online Serving](./online_serving)
-- For examples of using some of vLLM's advanced features (e.g. LMCache or Tensorizer) which are not specific to either of the above use cases, see [Others](./others)
+- If you are using vLLM from within Python code, see the *Offline Inference* section.
+- If you are using vLLM from an HTTP application or client, see the *Online Serving* section.
+- For examples of using some of vLLM's advanced features (e.g. LMCache or Tensorizer) which are not specific to either of the above use cases, see the *Others* section.