diff --git a/docs/source/models/supported_models.md b/docs/source/models/supported_models.md
index 2ebec2ea968ab..d8e81281a75ef 100644
--- a/docs/source/models/supported_models.md
+++ b/docs/source/models/supported_models.md
@@ -160,6 +160,35 @@ If vLLM successfully returns text (for generative models) or hidden states (for
 
 Otherwise, please refer to [Adding a New Model](#new-model) for instructions on how to implement your model in vLLM. Alternatively, you can [open an issue on GitHub](https://github.com/vllm-project/vllm/issues/new/choose) to request vLLM support.
 
+#### Using a proxy
+
+Here are some tips for loading/downloading models from Hugging Face using a proxy:
+
+- Set the proxy globally for your session (or set it in your shell profile):
+
+```shell
+export http_proxy=http://your.proxy.server:port
+export https_proxy=http://your.proxy.server:port
+```
+
+- Set the proxy for just the current command:
+
+```shell
+https_proxy=http://your.proxy.server:port huggingface-cli download <model_name>
+
+# or use the vllm command directly
+https_proxy=http://your.proxy.server:port vllm serve <model_name> --disable-log-requests
+```
+
+- Set the proxy in the Python interpreter:
+
+```python
+import os
+
+os.environ['http_proxy'] = 'http://your.proxy.server:port'
+os.environ['https_proxy'] = 'http://your.proxy.server:port'
+```
+
 ### ModelScope
 
 To use models from [ModelScope](https://www.modelscope.cn) instead of Hugging Face Hub, set an environment variable: