From 6368e777a8ead7fb62054d3779c6237361ec0d86 Mon Sep 17 00:00:00 2001
From: ldwang
Date: Fri, 13 Oct 2023 03:11:16 +0800
Subject: [PATCH] Add Aquila2 to README (#1331)

Signed-off-by: ldwang
Co-authored-by: ldwang
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index eaa515148bf6..5ddb2a48ca83 100644
--- a/README.md
+++ b/README.md
@@ -46,7 +46,7 @@ vLLM is flexible and easy to use with:
 
 vLLM seamlessly supports many Hugging Face models, including the following architectures:
 
-- Aquila (`BAAI/Aquila-7B`, `BAAI/AquilaChat-7B`, etc.)
+- Aquila & Aquila2 (`BAAI/AquilaChat2-7B`, `BAAI/AquilaChat2-34B`, `BAAI/Aquila-7B`, `BAAI/AquilaChat-7B`, etc.)
 - Baichuan (`baichuan-inc/Baichuan-7B`, `baichuan-inc/Baichuan-13B-Chat`, etc.)
 - BLOOM (`bigscience/bloom`, `bigscience/bloomz`, etc.)
 - Falcon (`tiiuae/falcon-7b`, `tiiuae/falcon-40b`, `tiiuae/falcon-rw-7b`, etc.)
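As a minimal sketch of what this README change implies, the snippet below loads one of the newly listed Aquila2 checkpoints through vLLM's offline `LLM` API. The prompt, sampling settings, and use of `trust_remote_code` are illustrative assumptions, not taken from the patch.

```python
# Hedged sketch: assumes vLLM's offline inference API; the prompt and
# sampling settings below are illustrative and not part of this patch.
from vllm import LLM, SamplingParams

# Load one of the Aquila2 checkpoints added to the supported-model list.
# trust_remote_code may be required for the BAAI configs (assumption).
llm = LLM(model="BAAI/AquilaChat2-7B", trust_remote_code=True)

sampling_params = SamplingParams(temperature=0.8, max_tokens=128)
outputs = llm.generate(["Hello, my name is"], sampling_params)

for output in outputs:
    # Each RequestOutput holds the generated completions for one prompt.
    print(output.outputs[0].text)
```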