mirror of
https://git.datalinker.icu/vllm-project/vllm.git
synced 2026-04-08 03:47:03 +08:00
[GPU Backend] [Doc]: Remove duplicate statements on missing GPU wheels. (#29962)
Signed-off-by: Ioana Ghiban <ioana.ghiban@arm.com>
parent b78772c433
commit 15b1511a15
@@ -5,9 +5,6 @@ vLLM supports AMD GPUs with ROCm 6.3 or above, and torch 2.8.0 and above.
 
 !!! tip
     [Docker](#set-up-using-docker) is the recommended way to use vLLM on ROCm.
 
-!!! warning
-    There are no pre-built wheels for this device, so you must either use the pre-built Docker image or build vLLM from source.
-
 # --8<-- [end:installation]
 
 # --8<-- [start:requirements]
@@ -2,9 +2,6 @@
 
 vLLM initially supports basic model inference and serving on Intel GPU platform.
 
-!!! warning
-    There are no pre-built wheels for this device, so you need build vLLM from source. Or you can use pre-built images which are based on vLLM released versions.
-
 # --8<-- [end:installation]
 
 # --8<-- [start:requirements]