Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-22 05:05:01 +08:00)
[Doc] Add instructions on using Podman when SELinux is active (#12136)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Parent: fead53ba78
Commit: 1475847a14
@@ -42,6 +42,9 @@ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
 By default vLLM will build for all GPU types for widest distribution. If you are just building for the
 current GPU type the machine is running on, you can add the argument `--build-arg torch_cuda_arch_list=""`
 for vLLM to find the current GPU type and build for that.
+
+If you are using Podman instead of Docker, you might need to disable SELinux labeling by
+adding `--security-opt label=disable` when running `podman build` command to avoid certain [existing issues](https://github.com/containers/buildah/discussions/4184).
 ```
 
 ## Building for Arm64/aarch64
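For illustration, the note added by this commit corresponds to a build invocation along these lines (a sketch only; the image tag and build target mirror the Docker command shown in the hunk header, and whether `label=disable` is needed depends on your host's SELinux configuration):

```shell
# Hypothetical Podman equivalent of the documented Docker build.
# On an SELinux-enforcing host, disable SELinux labeling for the build
# to avoid the buildah issue linked in the note.
podman build . \
  --security-opt label=disable \
  --target vllm-openai \
  --tag vllm/vllm-openai
```

As with the Docker command, `--build-arg torch_cuda_arch_list=""` can be appended to build only for the GPU type present on the machine.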