[Doc] Fix typo (#18355)

Elad Segal 2025-05-19 19:05:16 +03:00 committed by GitHub
parent 6781af5608
commit 84ab4feb7e


@@ -54,7 +54,7 @@ For a model to be compatible with the Transformers backend for vLLM it must:
 If the compatible model is:
-- on the Hugging Face Model Hub, simply set `trust_remote_code=True` for <project:#offline-inference> or `--trust-remode-code` for the <project:#openai-compatible-server>.
+- on the Hugging Face Model Hub, simply set `trust_remote_code=True` for <project:#offline-inference> or `--trust-remote-code` for the <project:#openai-compatible-server>.
 - in a local directory, simply pass directory path to `model=<MODEL_DIR>` for <project:#offline-inference> or `vllm serve <MODEL_DIR>` for the <project:#openai-compatible-server>.
 This means that, with the Transformers backend for vLLM, new models can be used before they are officially supported in Transformers or vLLM!
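For context, here is a minimal sketch of the two usages described in the changed lines, using vLLM's standard `LLM` entry point. The model ID and local path are hypothetical placeholders, not models from the original docs:

```python
from vllm import LLM

# Hub model with custom modeling code: trust_remote_code=True lets vLLM
# load the model's own code from the Hugging Face Model Hub.
# "some-org/brand-new-model" is a hypothetical ID for illustration.
llm = LLM(model="some-org/brand-new-model", trust_remote_code=True)

# Local checkpoint: pass the directory path directly as the model argument.
llm = LLM(model="/path/to/local/model_dir")

outputs = llm.generate("Hello, my name is")
```

The OpenAI-compatible server equivalents would be `vllm serve some-org/brand-new-model --trust-remote-code` and `vllm serve /path/to/local/model_dir`, matching the corrected `--trust-remote-code` flag in this commit.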