16 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Zhuohan Li | eedb46bf03 | Rename servers and change port numbers to reduce confusion (#149) | 2023-06-17 00:13:02 +08:00 |
| Woosuk Kwon | 311490a720 | Add script for benchmarking serving throughput (#145) | 2023-06-14 19:55:38 -07:00 |
| Zhuohan Li | 4298374265 | Add docstrings for LLMServer and related classes and examples (#142) | 2023-06-07 18:25:20 +08:00 |
| Woosuk Kwon | e38074b1e6 | Support FP32 (#141) | 2023-06-07 00:40:21 -07:00 |
| Woosuk Kwon | 376725ce74 | [PyPI] Packaging for PyPI distribution (#140) | 2023-06-05 20:03:14 -07:00 |
| Zhuohan Li | 1a956e136b | Fix various issues of async servers (#135) | 2023-06-05 23:44:50 +08:00 |
| Woosuk Kwon | 8274ca23ac | Add docstrings for LLM (#137) | 2023-06-04 12:52:41 -07:00 |
| Woosuk Kwon | 211318d44a | Add throughput benchmarking script (#133) | 2023-05-28 03:20:05 -07:00 |
| Woosuk Kwon | 337871c6fd | Enable LLaMA fast tokenizer (#132) | 2023-05-28 02:51:42 -07:00 |
| Zhuohan Li | 057daef778 | OpenAI Compatible Frontend (#116) | 2023-05-23 21:39:50 -07:00 |
| Woosuk Kwon | e86717833d | Incrementally decode output tokens (#121) | 2023-05-23 20:46:32 -07:00 |
| Woosuk Kwon | aedba6d5ec | Print warnings/errors for large swap space (#123) | 2023-05-23 18:22:26 -07:00 |
| Woosuk Kwon | a283ec2eec | Add contributing guideline and mypy config (#122) | 2023-05-23 17:58:51 -07:00 |
| Woosuk Kwon | 655a5e48df | Introduce LLM class for offline inference (#115) | 2023-05-21 17:04:18 -07:00 |
| Woosuk Kwon | f746ced08d | Implement stop strings and best_of (#114) | 2023-05-21 11:18:00 -07:00 |
| Woosuk Kwon | c3442c1f6f | Refactor system architecture (#109) | 2023-05-20 13:06:59 -07:00 |