[GPU Backend] [Doc]: Remove duplicate statements on missing GPU wheels. (#29962)

Signed-off-by: Ioana Ghiban <ioana.ghiban@arm.com>
ioana ghiban 2025-12-03 13:56:47 +01:00 committed by GitHub
parent b78772c433
commit 15b1511a15
2 changed files with 0 additions and 6 deletions


@@ -5,9 +5,6 @@ vLLM supports AMD GPUs with ROCm 6.3 or above, and torch 2.8.0 and above.
 !!! tip
     [Docker](#set-up-using-docker) is the recommended way to use vLLM on ROCm.
 
-!!! warning
-    There are no pre-built wheels for this device, so you must either use the pre-built Docker image or build vLLM from source.
-
 # --8<-- [end:installation]
 # --8<-- [start:requirements]


@@ -2,9 +2,6 @@
 vLLM initially supports basic model inference and serving on Intel GPU platform.
 
-!!! warning
-    There are no pre-built wheels for this device, so you need build vLLM from source. Or you can use pre-built images which are based on vLLM released versions.
-
 # --8<-- [end:installation]
 # --8<-- [start:requirements]