From 3a500cd0b6161933ec7e71cca8b2dfc0982b6f81 Mon Sep 17 00:00:00 2001
From: Reid <61492567+reidliu41@users.noreply.github.com>
Date: Fri, 2 May 2025 22:04:49 +0800
Subject: [PATCH] [doc] miss result (#17589)

Signed-off-by: reidliu41
Co-authored-by: reidliu41
---
 docs/source/features/quantization/fp8.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/features/quantization/fp8.md b/docs/source/features/quantization/fp8.md
index f87b2a02cd44..95e105357bd3 100644
--- a/docs/source/features/quantization/fp8.md
+++ b/docs/source/features/quantization/fp8.md
@@ -106,7 +106,7 @@ Load and run the model in `vllm`:
 
 ```python
 from vllm import LLM
 model = LLM("./Meta-Llama-3-8B-Instruct-FP8-Dynamic")
-model.generate("Hello my name is")
+result = model.generate("Hello my name is")
 print(result[0].outputs[0].text)
 ```