| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Woosuk Kwon | c9d5b6d4a8 | Replace FlashAttention with xformers (#70) | 2023-05-05 02:01:08 -07:00 |
| Woosuk Kwon | 189ae23133 | Use dtype from model config & Add Dolly V2 (#63) | 2023-05-04 03:05:37 -07:00 |
| Woosuk Kwon | e548c1488a | Add support for GPT-2 (#60) | 2023-05-04 02:59:56 -07:00 |
| Woosuk Kwon | e070829ae8 | Support bfloat16 data type (#54) | 2023-05-03 14:09:44 -07:00 |
| Zhuohan Li | 27f1410d06 | New weight loader without np copy (#52) | 2023-05-03 15:32:04 +08:00 |
| Woosuk Kwon | a96d63c21d | Add support for GPT-NeoX (Pythia) (#50) | 2023-04-28 00:32:10 -07:00 |
| Woosuk Kwon | ee88a7e5f3 | Add an option to use dummy model weights (#33) | 2023-04-08 23:36:12 -07:00 |
| Woosuk Kwon | 0f40557af6 | Implement block copy kernel to optimize beam search (#32) | 2023-04-07 17:45:07 -07:00 |
| Woosuk Kwon | 897cb2ae28 | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| Zhuohan Li | 1f01a18d39 | Merge QKV into one linear layer (#15) | 2023-04-02 00:23:29 -07:00 |
| Woosuk Kwon | a90c97d727 | Use FP32 for log probabilities (#19) | 2023-03-31 23:33:43 -07:00 |
| Woosuk Kwon | 09e9245478 | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| Woosuk Kwon | 88c0268a18 | Implement custom kernel for LLaMA rotary embedding (#14) | 2023-03-30 11:04:21 -07:00 |
| Woosuk Kwon | 80a2f812f1 | Implement LLaMA (#9) (Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>) | 2023-03-30 12:25:32 +08:00 |
| Zhuohan Li | 721fa3df15 | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| Woosuk Kwon | d359cda5fa | Minor | 2023-03-26 08:00:39 +00:00 |
| Zhuohan Li | 2f49f15585 | Support tensor parallel (#2) | 2023-03-21 13:45:42 -07:00 |
| Woosuk Kwon | cfae35b861 | Add miscellaneous updates (#8) | 2023-03-13 13:48:38 -07:00 |
| Woosuk Kwon | e9d3f2ff77 | Add memory analyzer & automatically configure KV cache size (#6) | 2023-03-11 23:23:14 -08:00 |
| Woosuk Kwon | 1a7eb7da61 | Support beam search & parallel generation (#7) | 2023-03-10 09:58:21 -08:00 |
| Woosuk Kwon | 04e5acc08e | Fix a bug in 1D input shape (#5) | 2023-03-06 10:05:27 -08:00 |
| Woosuk Kwon | 3e9f991d6a | Use FlashAttention for multi_query_kv_attention (#4) | 2023-03-01 21:13:08 -08:00 |
| Woosuk Kwon | 0deacbce6e | Implement single_query_cached_kv_attention kernel (#3) | 2023-03-01 15:02:19 -08:00 |
| Woosuk Kwon | cbf8779afa | Fix a bug in tying OPT embeddings (#1) | 2023-02-24 16:29:36 -08:00 |
| Woosuk Kwon | 762fd1c3fa | Refactor and annotate types for attention | 2023-02-24 08:58:46 +00:00 |
| Woosuk Kwon | 7f22f90e8c | Remove xformers | 2023-02-24 08:36:16 +00:00 |
| Woosuk Kwon | 932844f1cd | Fix attention | 2023-02-23 23:02:25 +00:00 |
| Woosuk Kwon | ba84b8728a | Fix attention | 2023-02-23 22:29:46 +00:00 |
| Woosuk Kwon | 87e0bcd426 | Fix attention | 2023-02-23 21:32:02 +00:00 |
| Woosuk Kwon | 1ce1333573 | Set default dtype to half | 2023-02-23 21:31:39 +00:00 |
| Woosuk Kwon | de0fabbc5c | Fix sampler | 2023-02-23 20:30:12 +00:00 |
| Woosuk Kwon | fdd0f2f472 | Minor | 2023-02-23 20:23:47 +00:00 |
| Woosuk Kwon | 7f985166f7 | Consider empty tensor | 2023-02-23 20:20:33 +00:00 |
| Woosuk Kwon | 86f9eb6d39 | Fix typo | 2023-02-23 20:19:24 +00:00 |
| Woosuk Kwon | d4bc1a4d24 | Add unoptimized OPT Attention | 2023-02-23 09:31:55 +00:00 |
| Woosuk Kwon | b56b6ca0d6 | Add greedy sampler | 2023-02-23 09:26:09 +00:00 |
| Woosuk Kwon | 343cea3dbc | Add seq_ids to input metadata | 2023-02-23 09:25:01 +00:00 |
| Woosuk Kwon | 4b1ac23f53 | Fix slot mapping | 2023-02-23 00:10:07 +00:00 |
| Woosuk Kwon | 7b6844e590 | Add input metadata | 2023-02-22 19:01:20 +00:00 |
| Woosuk Kwon | 608f74ffe5 | Minor | 2023-02-22 18:08:25 +00:00 |
| Woosuk Kwon | 709a69176e | Move worker/models -> models | 2023-02-22 18:03:48 +00:00 |