From 32d669275b1068e3261a47715d30e842817e000b Mon Sep 17 00:00:00 2001
From: cnorman
Date: Thu, 27 Mar 2025 17:04:32 -0500
Subject: [PATCH] Correct PowerPC to modern IBM Power (#15635)

Signed-off-by: Christy Norman
---
 docs/source/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/index.md b/docs/source/index.md
index 1624d5cf5aae7..402f242679041 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -43,7 +43,7 @@ vLLM is flexible and easy to use with:
 - Tensor parallelism and pipeline parallelism support for distributed inference
 - Streaming outputs
 - OpenAI-compatible API server
-- Support NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, Gaudi® accelerators and GPUs, PowerPC CPUs, TPU, and AWS Trainium and Inferentia Accelerators.
+- Support NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, Gaudi® accelerators and GPUs, IBM Power CPUs, TPU, and AWS Trainium and Inferentia Accelerators.
 - Prefix caching support
 - Multi-lora support