[doc] add download model tips (#16389)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Reid 2025-04-10 15:45:26 +08:00 committed by GitHub
parent c70cf0fe06
commit 65e09094c4


@@ -160,6 +160,35 @@ If vLLM successfully returns text (for generative models) or hidden states (for
Otherwise, please refer to [Adding a New Model](#new-model) for instructions on how to implement your model in vLLM.
Alternatively, you can [open an issue on GitHub](https://github.com/vllm-project/vllm/issues/new/choose) to request vLLM support.
#### Using a proxy
Here are some tips for loading or downloading models from Hugging Face through a proxy:
- Set the proxy globally for your current session (or add it to your shell profile, e.g. `~/.bashrc`, to make it persistent):
```shell
export http_proxy=http://your.proxy.server:port
export https_proxy=http://your.proxy.server:port
```
- Set the proxy for just the current command:
```shell
https_proxy=http://your.proxy.server:port huggingface-cli download <model_name>
# or use the vllm command directly
https_proxy=http://your.proxy.server:port vllm serve <model_name> --disable-log-requests
```
- Set the proxy from within the Python interpreter (a fuller download sketch follows this list):
```python
import os
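# These must be set before any library opens an HTTP connection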
os.environ['http_proxy'] = 'http://your.proxy.server:port'
os.environ['https_proxy'] = 'http://your.proxy.server:port'
```
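For example, a minimal end-to-end sketch that sets the proxy in-process and then downloads a model with `huggingface_hub` (the repo id `facebook/opt-125m` is only a small placeholder):

```python
import os

# Set the proxy before any HTTP connection is opened
os.environ["http_proxy"] = "http://your.proxy.server:port"
os.environ["https_proxy"] = "http://your.proxy.server:port"

from huggingface_hub import snapshot_download

# Download the model weights into the local Hugging Face cache
snapshot_download(repo_id="facebook/opt-125m")
```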
### ModelScope
To use models from [ModelScope](https://www.modelscope.cn) instead of Hugging Face Hub, set an environment variable:
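```shell
export VLLM_USE_MODELSCOPE=True
```

With this set, model identifiers passed to vLLM are resolved against ModelScope rather than Hugging Face Hub.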