| Author | Commit | Message | Date |
|---|---|---|---|
| Zhuohan Li | 96853af5a8 | Optimize MQA Kernel (#452) | 2023-07-14 20:06:40 -04:00 |
| Andre Slavescu | c894836108 | [Model] Add support for GPT-J (#226) (Co-authored-by: Woosuk Kwon \<woosuk.kwon@berkeley.edu\>) | 2023-07-08 17:55:16 -07:00 |
| Woosuk Kwon | 404422f42e | [Model] Add support for MPT (#334) | 2023-07-03 16:47:53 -07:00 |
| Woosuk Kwon | e41f06702c | Add support for BLOOM (#331) | 2023-07-03 13:12:35 -07:00 |
| Woosuk Kwon | 0b98ba15c7 | Change the name to vLLM (#150) | 2023-06-17 03:07:40 -07:00 |
| Woosuk Kwon | e38074b1e6 | Support FP32 (#141) | 2023-06-07 00:40:21 -07:00 |
| Woosuk Kwon | d721168449 | Improve setup script & Add a guard for bfloat16 kernels (#130) | 2023-05-27 00:59:32 -07:00 |
| Woosuk Kwon | 667ba3995c | Add copyright headers to source files adapted from FT (#104) | 2023-05-14 22:19:19 -07:00 |
| Woosuk Kwon | 130d5fd8c7 | Fix a bug in attention kernel (#68) | 2023-05-04 02:56:09 -07:00 |
| Woosuk Kwon | e070829ae8 | Support bfloat16 data type (#54) | 2023-05-03 14:09:44 -07:00 |
| Woosuk Kwon | 436e523bf1 | Refactor attention kernels (#53) | 2023-05-03 13:40:13 -07:00 |
| Woosuk Kwon | a96d63c21d | Add support for GPT-NeoX (Pythia) (#50) | 2023-04-28 00:32:10 -07:00 |
| Woosuk Kwon | 0f4b32199e | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00 |
| Siyuan (Ryans) Zhuang | e3cec88aa5 | Memcpy kernel for flash attention (#29) (optimize; add benchmark; add assert; add test) | 2023-04-10 18:22:49 -07:00 |
| Woosuk Kwon | b9926f7f66 | Support block size 32 (#35) | 2023-04-09 23:07:18 -07:00 |
| Woosuk Kwon | c267b1a02c | Add query stride to multi_query_cached_kv_attention & Add kernel benchmark script (#27) | 2023-04-08 13:36:09 -07:00 |
| Woosuk Kwon | 0f40557af6 | Implement block copy kernel to optimize beam search (#32) | 2023-04-07 17:45:07 -07:00 |
| Siyuan (Ryans) Zhuang | 21b3671bbc | Basic attention kernel that supports cached KV + (multi-)prompts (#24) | 2023-04-04 20:34:46 -07:00 |
| Woosuk Kwon | 897cb2ae28 | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| Woosuk Kwon | 09e9245478 | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| Woosuk Kwon | 88c0268a18 | Implement custom kernel for LLaMA rotary embedding (#14) | 2023-03-30 11:04:21 -07:00 |
| Woosuk Kwon | cfae35b861 | Add miscellaneous updates (#8) | 2023-03-13 13:48:38 -07:00 |
| Woosuk Kwon | 1a7eb7da61 | Support beam search & parallel generation (#7) | 2023-03-10 09:58:21 -08:00 |
| Woosuk Kwon | 0deacbce6e | Implement single_query_cached_kv_attention kernel (#3) | 2023-03-01 15:02:19 -08:00 |
| Woosuk Kwon | c413c41cda | Add reshape_and_cache op | 2023-02-18 19:22:57 +00:00 |
| Woosuk Kwon | ffad4e1e03 | cache_kernel -> cache_kernels | 2023-02-16 20:05:45 +00:00 |
| Woosuk Kwon | 6d2f74efb3 | Remove redundant fn | 2023-02-16 09:24:42 +00:00 |
| Woosuk Kwon | 6f058c7ba8 | Implement cache ops | 2023-02-16 07:47:03 +00:00 |