[doc] add download model tips (#16389)
Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
@@ -160,6 +160,35 @@ If vLLM successfully returns text (for generative models) or hidden states (for
Otherwise, please refer to [Adding a New Model](#new-model) for instructions on how to implement your model in vLLM.
Alternatively, you can [open an issue on GitHub](https://github.com/vllm-project/vllm/issues/new/choose) to request vLLM support.
#### Using a proxy
Here are some tips for loading or downloading models from Hugging Face when behind a proxy:
- Set the proxy globally for your session (or add it to your shell profile):
```shell
export http_proxy=http://your.proxy.server:port
export https_proxy=http://your.proxy.server:port
```
- Set the proxy for just the current command:
```shell
https_proxy=http://your.proxy.server:port huggingface-cli download <model_name>

# or pass it to the vllm command directly
https_proxy=http://your.proxy.server:port vllm serve <model_name> --disable-log-requests
```
- Set the proxy in the Python interpreter:
```python
import os

os.environ['http_proxy'] = 'http://your.proxy.server:port'
os.environ['https_proxy'] = 'http://your.proxy.server:port'
```
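
For example, a minimal end-to-end sketch (assuming vLLM is installed; `facebook/opt-125m` is a placeholder model, and the proxy address is yours to fill in) that sets the proxy before vLLM triggers the download:

```python
import os

# The proxy must be in the environment before any download starts
# (http://your.proxy.server:port is a placeholder).
os.environ['http_proxy'] = 'http://your.proxy.server:port'
os.environ['https_proxy'] = 'http://your.proxy.server:port'

from vllm import LLM

# The first use fetches the model weights from Hugging Face through the proxy.
llm = LLM(model="facebook/opt-125m")
```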
### ModelScope
To use models from [ModelScope](https://www.modelscope.cn) instead of Hugging Face Hub, set an environment variable:
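
```shell
# Tell vLLM to resolve model names against ModelScope instead of Hugging Face Hub.
export VLLM_USE_MODELSCOPE=True
```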