Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-10 05:04:58 +08:00)
[Bugfix][Doc] Fix Doc Formatting (#6048)
This commit is contained in:
parent 83bdcb6ac3
commit 8e0817c262
@@ -1,5 +1,5 @@
 Frequently Asked Questions
-========================
+===========================
 
 Q: How can I serve multiple models on a single port using the OpenAI API?
 
@@ -9,4 +9,4 @@ A: Assuming that you're referring to using OpenAI compatible server to serve mul
 
 Q: Which model to use for offline inference embedding?
 
-A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Instead models, such as Llama-3-8b, Mistral-7B-Instruct-v0.3, are generation models rather than an embedding model
+A: If you want to use an embedding model, try: https://huggingface.co/intfloat/e5-mistral-7b-instruct. Instead models, such as Llama-3-8b, Mistral-7B-Instruct-v0.3, are generation models rather than an embedding model
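
The first question touched by this diff asks about serving multiple models on a single port with the OpenAI-compatible API; the answer quoted in the second hunk header is truncated above. As a hedged illustration only, not part of this commit, the sketch below shows the general workaround of running one server instance per model on separate ports, assuming the vllm.entrypoints.openai.api_server entry point; the model names and ports are placeholders rather than anything taken from the FAQ.

# Hedged sketch, not part of commit 8e0817c262: run one OpenAI-compatible
# vLLM server per model, each on its own port. A reverse proxy (not shown)
# would route requests to the right port. Model names below are placeholders.
import subprocess
import sys

MODELS = {
    8000: "meta-llama/Meta-Llama-3-8B-Instruct",   # placeholder model
    8001: "mistralai/Mistral-7B-Instruct-v0.3",    # placeholder model
}

procs = []
for port, model in MODELS.items():
    # Each instance serves exactly one model.
    procs.append(subprocess.Popen([
        sys.executable, "-m", "vllm.entrypoints.openai.api_server",
        "--model", model,
        "--port", str(port),
    ]))

for proc in procs:
    proc.wait()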
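The second question covered by the diff recommends intfloat/e5-mistral-7b-instruct for offline embedding inference. A minimal sketch of what that looks like in Python follows, assuming a vLLM version in which LLM.encode() is available for embedding models; the prompts and the enforce_eager flag are illustrative choices, not taken from the FAQ.

# Minimal sketch, assuming the installed vLLM version supports LLM.encode()
# for embedding models such as intfloat/e5-mistral-7b-instruct.
from vllm import LLM

# Load the embedding model; generation models (e.g. Llama-3-8b,
# Mistral-7B-Instruct-v0.3) are not suitable here.
llm = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)

prompts = [
    "query: how can I run offline embedding inference with vLLM?",
    "passage: e5-mistral-7b-instruct is an embedding model.",
]

# encode() returns one output per prompt carrying the pooled embedding vector.
outputs = llm.encode(prompts)
for prompt, output in zip(prompts, outputs):
    embedding = output.outputs.embedding  # list of floats
    print(f"{prompt[:40]!r} -> embedding of length {len(embedding)}")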