Simon Mo | 1e4277d2d1 | lint: format all python file instead of just source code (#2567) | 2024-01-23 15:53:06 -08:00
Antoni Baum | 9b945daaf1 | [Experimental] Add multi-LoRA support (#1804) | 2024-01-23 15:26:37 -08:00
  Co-authored-by: Chen Shen <scv119@gmail.com>
  Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
  Co-authored-by: Avnish Narayan <avnish@anyscale.com>
Erfan Al-Hossami | 9c1352eb57 | [Feature] Simple API token authentication and pluggable middlewares (#1106) | 2024-01-23 15:13:00 -08:00
Jason Zhu | 7a0b011dd5 | Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553) | 2024-01-22 14:47:25 -08:00
Harry Mellor | 63e835cbcc | Fix progress bar and allow HTTPS in benchmark_serving.py (#2552) | 2024-01-22 14:40:31 -08:00
Junyang Lin | 94b5edeb53 | Add qwen2 (#2495) | 2024-01-22 14:34:21 -08:00
Philipp Moritz | ab7e6006d6 | Fix https://github.com/vllm-project/vllm/issues/2540 (#2545) | 2024-01-22 19:02:38 +01:00
Cade Daniel | 18bfcdd05c | [Speculative decoding 2/9] Multi-step worker for draft model (#2424) | 2024-01-21 16:31:47 -08:00
Jannis Schönleber | 71d63ed72e | migrate pydantic from v1 to v2 (#2531) | 2024-01-21 16:05:56 -08:00
Nick Hill | d75c40734a | [Fix] Keep scheduler.running as deque (#2523) | 2024-01-20 22:36:09 -08:00
Junda Chen | 5b23c3f26f | Add group as an argument in broadcast ops (#2522) | 2024-01-20 16:00:26 -08:00
Simon Mo | 00efdc84ba | Add benchmark serving to CI (#2505) | 2024-01-19 20:20:19 -08:00
Roy | 91a61da9b1 | [Bugfix] fix load local safetensors model (#2512) | 2024-01-19 16:26:16 -08:00
Zhuohan Li | ef9b636e2d | Simplify broadcast logic for control messages (#2501) | 2024-01-19 11:23:30 -08:00
Harry Mellor | 2709c0009a | Support OpenAI API server in benchmark_serving.py (#2172) | 2024-01-18 20:34:08 -08:00
Simon Mo | dd7e8f5f64 | refactor complemention api for readability (#2499) | 2024-01-18 16:45:14 -08:00
ljss | d2a68364c4 | [BugFix] Fix abort_seq_group (#2463) | 2024-01-18 15:10:42 -08:00
Nikola Borisov | 7e1081139d | Don't download both safetensor and bin files. (#2480) | 2024-01-18 11:05:53 -08:00
Liangfu Chen | 18473cf498 | [Neuron] Add an option to build with neuron (#2065) | 2024-01-18 10:58:50 -08:00
zspo | 4df417d059 | fix: fix some args desc (#2487) | 2024-01-18 09:41:44 -08:00
Jason Zhu | 5d80a9178b | Minor fix in prefill cache example (#2494) | 2024-01-18 09:40:34 -08:00
YingchaoX | 8a25d3a71a | fix stablelm.py tensor-parallel-size bug (#2482) | 2024-01-18 09:39:46 -08:00
shiyi.c_98 | d10f8e1d43 | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00
  Co-authored-by: DouHappy <2278958187@qq.com>
  Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
FlorianJoncour | 14cc317ba4 | OpenAI Server refactoring (#2360) | 2024-01-16 21:33:14 -08:00
Hyunsung Lee | e1957c6ebd | Add StableLM3B model (#2372) | 2024-01-16 20:32:40 -08:00
Simon Mo | 8cd5a992bf | ci: retry on build failure as well (#2457) | 2024-01-16 12:51:04 -08:00
Simon Mo | 947f0b23cc | CI: make sure benchmark script exit on error (#2449) | 2024-01-16 09:50:13 -08:00
Chenhui Zhang | f780504d12 | fix weigit loading for GQA with TP (#2379) | 2024-01-15 15:43:59 -08:00
Simon Mo | bfc072addf | Allow buildkite to retry build on agent lost (#2446) | 2024-01-15 15:43:15 -08:00
Woosuk Kwon | 2a18da257c | Announce the second vLLM meetup (#2444) | 2024-01-15 14:11:59 -08:00
Simon Mo | 6e01e8c1c8 | [CI] Add Buildkite (#2355) | 2024-01-14 12:37:58 -08:00
Roy | 9f659bf07f | [Minor] Optimize cuda graph memory usage (#2437) | 2024-01-14 18:40:51 +01:00
Woosuk Kwon | 35c4bc20d9 | [Minor] Fix err msg (#2431) | 2024-01-12 14:02:52 -08:00
陈序 | 218dc2ccda | Aligning top_p and top_k Sampling (#1885) | 2024-01-12 22:51:03 +01:00
  * Align top_p and top_k with huggingface
  * remove _get_prompt_and_output_tokens
  * rename _apply_top_p_top_k
  * compare top_p top_k with hf
  * fix test errors
Simon | 827cbcd37c | Update quickstart.rst (#2369) | 2024-01-12 12:56:18 -08:00
Ben | cb7a1c1cbf | Suggest using dtype=half when OOM. | 2024-01-12 12:33:29 -08:00
Gary Hui | 7878958c0d | Address Phi modeling update 2 (#2428) | 2024-01-12 12:16:49 -08:00
Chirag Jain | ce036244c9 | Allow setting fastapi root_path argument (#2341) | 2024-01-12 10:59:59 -08:00
陈序 | 48cf1e413c | fix: deque mutated during iteration in abort_seq_group (#2371) | 2024-01-12 17:44:18 +01:00
arkohut | 97460585d9 | Add gradio chatbot for openai webserver (#2307) | 2024-01-11 19:45:56 -08:00
Zhuohan Li | f745847ef7 | [Minor] Fix the format in quick start guide related to Model Scope (#2425) | 2024-01-11 19:44:01 -08:00
Jiaxiang | 6549aef245 | [DOC] Add additional comments for LLMEngine and AsyncLLMEngine (#1011) | 2024-01-11 19:26:49 -08:00
Woosuk Kwon | 50376faa7b | Rename phi_1_5 -> phi (#2385) | 2024-01-11 16:23:43 -08:00
Yunfeng Bai | 4b61c6b669 | get_ip(): Fix ipv4 ipv6 dualstack (#2408) | 2024-01-10 11:39:58 -08:00
Cade Daniel | 79d64c4954 | [Speculative decoding 1/9] Optimized rejection sampler (#2336) | 2024-01-09 15:38:41 -08:00
KKY | 74cd5abdd1 | Add baichuan chat template jinjia file (#2390) | 2024-01-09 09:13:02 -08:00
Woosuk Kwon | 28c3f12104 | [Minor] Remove unused code in attention (#2384) | 2024-01-08 13:13:08 -08:00
Woosuk Kwon | c884819135 | Fix eager mode performance (#2377) | 2024-01-08 10:11:06 -08:00
Nadav Shmayovits | 05921a9a7a | Changed scheduler to use deques instead of lists (#2290) | 2024-01-07 09:48:07 -08:00
  Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Iskren Ivov Chernev | d0215a58e7 | Ensure metrics are logged regardless of requests (#2347) | 2024-01-05 05:24:42 -08:00