284 Commits

Author / SHA1 / Message / Date
Zhuohan Li
14f9c72bfd
Update Supported Model List (#825) 2023-08-22 11:51:44 -07:00
shunxing1234
ad5f2fe34c
Add support for aquila (#663)
* add aquila

Signed-off-by: ftgreat <ftgreat@163.com>

* fix some bugs

Signed-off-by: shunxing1234 <xw747777271@gmail.com>

* delete pdb

Signed-off-by: shunxing1234 <xw747777271@gmail.com>

* fix bugs

Signed-off-by: shunxing1234 <xw747777271@gmail.com>

* fix bugs

Signed-off-by: shunxing1234 <xw747777271@gmail.com>

* delete whitespace

Signed-off-by: shunxing1234 <xw747777271@gmail.com>

* format

* fix order

---------

Signed-off-by: ftgreat <ftgreat@163.com>
Signed-off-by: shunxing1234 <xw747777271@gmail.com>
Co-authored-by: ftgreat <ftgreat@163.com>
2023-08-22 00:13:36 -07:00
zhaoyang-star
4f8584756d
Fix the mqa=False case in gpt_bigcode (#806) 2023-08-21 22:22:06 -07:00
Xudong Zhang
65fc1c3127
set default compute capability according to CUDA version (#773) 2023-08-21 16:05:44 -07:00
Daniel
c393af6cd7
[Feature | CI] Added a github action to build wheels (#746) 2023-08-21 16:59:15 +09:00
wangcx18
0c04ce3234
Fix typo in sampling_params.py (#788) 2023-08-18 10:12:46 +09:00
Xinyu Yang
73b3de79ea
explicitly del state (#784) 2023-08-17 12:56:04 -07:00
Abraham-Xu
d1744376ae
Align with huggingface Top K sampling (#753) 2023-08-15 16:44:33 -07:00
Ikko Eltociear Ashimine
805de738f6
Fix typo in tokenizer.py (#750)
conjuction -> conjunction
2023-08-14 22:26:36 -07:00
Uranus
1b151ed181
Fix baichuan doc style (#748) 2023-08-13 20:57:31 -07:00
WanMok
e06f504a76
Supports tokens and arrays of tokens as inputs to the OpenAI completion API (#715) 2023-08-11 12:14:34 -07:00
WRH
462ae5220a
[Fix] unwanted bias in InternLM Model (#740) 2023-08-11 11:40:37 -07:00
Nicolas Basile
66c54aa9c3
Check the max prompt length for the OpenAI completions API (#472) 2023-08-08 17:43:49 -07:00
Jia Guoqing
735ecfff61
add internlm model (#528) 2023-08-08 16:35:06 -07:00
Qing
a57d13cc96
add QWen-7b (#685)
Co-authored-by: wq.chu <wq.chu@tianrang-inc.com>
2023-08-08 13:50:38 -07:00
Dean Leitersdorf
79af7e96a0
[OPTIMIZATION] Optimizes the single_query_cached_kv_attention kernel (#420) 2023-08-04 10:57:29 -07:00
Wen Sun
621980bdc0
fix: incorrect number of attention heads in bigcode (#676) 2023-08-04 10:35:22 -07:00
Zhuohan Li
aa84c92ef6
Bump up version to 0.1.3 (#657) v0.1.3 2023-08-02 16:46:53 -07:00
Zhuohan Li
f7389f4763
[Doc] Add Baichuan 13B to supported models (#656) 2023-08-02 16:45:12 -07:00
Woosuk Kwon
55fe8a81ec
Refactor scheduler (#658) 2023-08-02 16:42:01 -07:00
YHPeter
e8ddc08ec8
[BUG FIX] upgrade fschat version to 0.2.23 (#650)
Co-authored-by: hao.yu <hao.yu@cn-c017.server.mila.quebec>
2023-08-02 14:05:59 -07:00
Zhuohan Li
1b0bd0fe8a
Add Falcon support (new) (#592) 2023-08-02 14:04:39 -07:00
Lily Liu
20044cab7a
Fix log message in scheduler (#652) 2023-08-02 13:35:10 -07:00
Song
64f23c2900
fix baichuan to handle the different position embeddings of the 7b and 13b models (#643) 2023-08-01 22:22:51 -07:00
Qing
d4c7755ca8
fix baichuan-7b tensor parallelism (#598)
Co-authored-by: wq.chu <wq.chu@tianrang-inc.com>
2023-08-01 15:41:36 -07:00
Chaofan Lin
aa39e42c5a
fix doc (#622) 2023-07-31 13:11:57 -07:00
Fang li
953f28cf9a
fix ModuleNotFoundError (#599)
Co-authored-by: fangli <fangli@tencent.com>
2023-07-29 20:52:41 -07:00
Xudong Zhang
c0d00f5be6
[Fix] fix import error of RayWorker (#604) (#605) 2023-07-27 23:37:40 -07:00
Zhuohan Li
58a072be15
[Fix] Add model sequence length into model config (#575) 2023-07-25 23:46:30 -07:00
Zhuohan Li
82ad323dee
[Fix] Add chat completion Example and simplify dependencies (#576) 2023-07-25 23:45:48 -07:00
Zhuohan Li
df5dd3c68e
Add Baichuan-7B to README (#494) 2023-07-25 15:25:12 -07:00
MoeedDar
2d867b55fa
fixed "tensor parallel is not defined" error (#564) 2023-07-25 14:16:51 -07:00
Tao Peng
d7a1c6d614
Fix paged attention testing. (#495)
Signed-off-by: Tao Peng <jiankeng.pt@alibaba-inc.com>
2023-07-24 21:01:56 -07:00
Zhuohan Li
7d5a155e4a
[Fix] Fix GPTBigcoder for distributed execution (#503) 2023-07-24 18:36:33 -07:00
leegohi04517
1dde34e0f8
GPTJConfig has no attribute rotary. (#532) 2023-07-24 11:29:30 -07:00
Zhuohan Li
6fc2a38b11
Add support for LLaMA-2 (#505) 2023-07-20 11:38:27 -07:00
Antoni Baum
c487a221ee
Fix bad assert in initialize_cluster if PG already exists (#526) 2023-07-19 23:17:12 -07:00
Antoni Baum
9925c17940
Ray placement group support (#397) 2023-07-19 22:49:31 -07:00
Ricardo Lu
8c4b2592fb
fix: enable trust-remote-code in api server & benchmark. (#509) 2023-07-19 17:06:15 -07:00
WRH
cf21a9bd5c
support trust_remote_code in benchmark (#518) 2023-07-19 17:02:40 -07:00
Massimiliano Pronesti
16c3e295a8
fix(ray_utils): ignore re-init error (#465) 2023-07-19 17:01:19 -07:00
Song
bda41c70dd
hotfix attn ALiBi w/o head mapping (#496)
Co-authored-by: oliveryuan <oliveryuan@basemind.com>
2023-07-18 11:31:48 -07:00
Lily Liu
453bafb96f
Merge pull request #498 from MoeedDar/main
Fixed old name reference for max_seq_len
2023-07-18 09:22:56 -07:00
MoeedDar
328d231c17
Fixed old name reference for max_seq_len 2023-07-18 16:47:59 +01:00
Lily Liu
b4b195b360
fix max seq len (#489) 2023-07-17 23:20:20 -07:00
codethazine
20b0d88d16
Add support for baichuan (#365) 2023-07-17 13:50:55 -07:00
Zhuohan Li
2bdea7ac11
[Fix] Fix the condition of max_seq_len (#477) 2023-07-17 00:33:48 -04:00
Zhanghao Wu
58df2883cb
[Doc] Add doc for running vLLM on the cloud (#426)
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2023-07-16 13:37:14 -07:00
Zhangir Azerbayev
6d7d95a70a
Offload port selection to OS (#467) 2023-07-15 23:11:02 -07:00
Zhuohan Li
96853af5a8
Optimize MQA Kernel (#452) 2023-07-14 20:06:40 -04:00