Author | Commit | Message | Date
Zhuohan Li | eedb46bf03 | Rename servers and change port numbers to reduce confusion (#149) | 2023-06-17 00:13:02 +08:00
Zhuohan Li | 5020e1e80c | Non-streaming simple fastapi server (#144) | 2023-06-10 10:43:07 -07:00
Zhuohan Li | 4298374265 | Add docstrings for LLMServer and related classes and examples (#142) | 2023-06-07 18:25:20 +08:00
Woosuk Kwon | e38074b1e6 | Support FP32 (#141) | 2023-06-07 00:40:21 -07:00
Woosuk Kwon | 376725ce74 | [PyPI] Packaging for PyPI distribution (#140) | 2023-06-05 20:03:14 -07:00
Zhuohan Li | 1a956e136b | Fix various issues of async servers (#135) | 2023-06-05 23:44:50 +08:00
Woosuk Kwon | 8274ca23ac | Add docstrings for LLM (#137) | 2023-06-04 12:52:41 -07:00
Woosuk Kwon | 62ec38ea41 | Document supported models (#127) | 2023-06-02 22:35:17 -07:00
Woosuk Kwon | 211318d44a | Add throughput benchmarking script (#133) | 2023-05-28 03:20:05 -07:00
Woosuk Kwon | 4a151dd453 | Add activation registry (#126) | 2023-05-25 00:09:07 -07:00
Zhuohan Li | 057daef778 | OpenAI Compatible Frontend (#116) | 2023-05-23 21:39:50 -07:00
Woosuk Kwon | 3f942acfe1 | Fix latency benchmark script (#118) | 2023-05-22 17:03:40 -07:00
Woosuk Kwon | 655a5e48df | Introduce LLM class for offline inference (#115) | 2023-05-21 17:04:18 -07:00
Woosuk Kwon | f746ced08d | Implement stop strings and best_of (#114) | 2023-05-21 11:18:00 -07:00
Woosuk Kwon | c3442c1f6f | Refactor system architecture (#109) | 2023-05-20 13:06:59 -07:00