645 Commits

Author SHA1 Message Date
Woosuk Kwon
2a18da257c
Announce the second vLLM meetup (#2444) 2024-01-15 14:11:59 -08:00
Simon Mo
6e01e8c1c8
[CI] Add Buildkite (#2355) 2024-01-14 12:37:58 -08:00
Roy
9f659bf07f
[Minor] Optimize cuda graph memory usage (#2437) 2024-01-14 18:40:51 +01:00
Woosuk Kwon
35c4bc20d9
[Minor] Fix err msg (#2431) 2024-01-12 14:02:52 -08:00
陈序
218dc2ccda
Aligning top_p and top_k Sampling (#1885)
* Align top_p and top_k with huggingface

* remove _get_prompt_and_output_tokens

* rename _apply_top_p_top_k

* compare top_p top_k with hf

* fix test errors
2024-01-12 22:51:03 +01:00
Simon
827cbcd37c
Update quickstart.rst (#2369) 2024-01-12 12:56:18 -08:00
Ben
cb7a1c1cbf
Suggest using dtype=half when OOM. 2024-01-12 12:33:29 -08:00
Gary Hui
7878958c0d
Address Phi modeling update 2 (#2428) 2024-01-12 12:16:49 -08:00
Chirag Jain
ce036244c9
Allow setting fastapi root_path argument (#2341) 2024-01-12 10:59:59 -08:00
陈序
48cf1e413c
fix: deque mutated during iteration in abort_seq_group (#2371) 2024-01-12 17:44:18 +01:00
arkohut
97460585d9
Add gradio chatbot for openai webserver (#2307) 2024-01-11 19:45:56 -08:00
Zhuohan Li
f745847ef7
[Minor] Fix the format in quick start guide related to Model Scope (#2425) 2024-01-11 19:44:01 -08:00
Jiaxiang
6549aef245
[DOC] Add additional comments for LLMEngine and AsyncLLMEngine (#1011) 2024-01-11 19:26:49 -08:00
Woosuk Kwon
50376faa7b
Rename phi_1_5 -> phi (#2385) 2024-01-11 16:23:43 -08:00
Yunfeng Bai
4b61c6b669
get_ip(): Fix ipv4 ipv6 dualstack (#2408) 2024-01-10 11:39:58 -08:00
Cade Daniel
79d64c4954
[Speculative decoding 1/9] Optimized rejection sampler (#2336) 2024-01-09 15:38:41 -08:00
KKY
74cd5abdd1
Add baichuan chat template jinja file (#2390) 2024-01-09 09:13:02 -08:00
Woosuk Kwon
28c3f12104
[Minor] Remove unused code in attention (#2384) 2024-01-08 13:13:08 -08:00
Woosuk Kwon
c884819135
Fix eager mode performance (#2377) 2024-01-08 10:11:06 -08:00
Nadav Shmayovits
05921a9a7a
Changed scheduler to use deques instead of lists (#2290)
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
2024-01-07 09:48:07 -08:00
Iskren Ivov Chernev
d0215a58e7
Ensure metrics are logged regardless of requests (#2347) 2024-01-05 05:24:42 -08:00
Alexandre Payot
937e7b7d7c
Build docker image with shared objects from "build" step (#2237) 2024-01-04 09:35:18 -08:00
ljss
aee8ef661a
Minor fix of type hint (#2340) 2024-01-03 21:27:56 -08:00
Woosuk Kwon
2e0b6e7757
Bump up to v0.2.7 (#2337) v0.2.7 2024-01-03 17:35:56 -08:00
Woosuk Kwon
941767127c
Revert the changes in test_cache (#2335) 2024-01-03 17:32:05 -08:00
Ronen Schaffer
74d8d77626
Remove unused const TIMEOUT_TO_PREVENT_DEADLOCK (#2321) 2024-01-03 15:49:07 -08:00
Zhuohan Li
fd4ea8ef5c
Use NCCL instead of ray for control-plane communication to remove serialization overhead (#2221) 2024-01-03 11:30:22 -08:00
Ronen Schaffer
1066cbd152
Remove deprecated parameter: concurrency_count (#2315) 2024-01-03 09:56:21 -08:00
Woosuk Kwon
6ef00b03a2
Enable CUDA graph for GPTQ & SqueezeLLM (#2318) 2024-01-03 09:52:29 -08:00
Roy
9140561059
[Minor] Fix typo and remove unused code (#2305) 2024-01-02 19:23:15 -08:00
Jee Li
77af974b40
[FIX] Support non-zero CUDA devices in custom kernels (#1959) 2024-01-02 19:09:59 -08:00
Jong-hun Shin
4934d49274
Support GPT-NeoX Models without attention biases (#2301) 2023-12-30 11:42:04 -05:00
Zhuohan Li
358c328d69
[BUGFIX] Fix communication test (#2285) 2023-12-27 17:18:11 -05:00
Zhuohan Li
4aaafdd289
[BUGFIX] Fix the path of test prompts (#2273) 2023-12-26 10:37:21 -08:00
Zhuohan Li
66b108d142
[BUGFIX] Fix API server test (#2270) 2023-12-26 10:37:06 -08:00
Zhuohan Li
e0ff920001
[BUGFIX] Do not return ignored sentences twice in async llm engine (#2258) 2023-12-26 13:41:09 +08:00
blueceiling
face83c7ec
[Docs] Add "About" Heading to README.md (#2260) 2023-12-25 16:37:07 -08:00
Shivam Thakkar
1db83e31a2
[Docs] Update installation instructions to include CUDA 11.8 xFormers (#2246) 2023-12-22 23:20:02 -08:00
Woosuk Kwon
a1b9cb2a34
[BugFix] Fix recovery logic for sequence group (#2186) 2023-12-20 21:52:37 -08:00
Woosuk Kwon
3a4fd5ca59
Disable Ray usage stats collection (#2206) 2023-12-20 21:52:08 -08:00
Ronen Schaffer
c17daa9f89
[Docs] Fix broken links (#2222) 2023-12-20 12:43:42 -08:00
Antoni Baum
bd29cf3d3a
Remove Sampler copy stream (#2209) 2023-12-20 00:04:33 -08:00
Hanzhi Zhou
31bff69151
Make _prepare_sample non-blocking and use pinned memory for input buffers (#2207) 2023-12-19 16:52:46 -08:00
Woosuk Kwon
ba4f826738
[BugFix] Fix weight loading for Mixtral with TP (#2208) 2023-12-19 16:16:11 -08:00
avideci
de60a3fb93
Added DeciLM-7b and DeciLM-7b-instruct (#2062) 2023-12-19 02:29:33 -08:00
Woosuk Kwon
21d5daa4ac
Add warning on CUDA graph memory usage (#2182) 2023-12-18 18:16:17 -08:00
Suhong Moon
290e015c6c
Update Help Text for --gpu-memory-utilization Argument (#2183) 2023-12-18 11:33:24 -08:00
kliuae
1b7c791d60
[ROCm] Fixes for GPTQ on ROCm (#2180) 2023-12-18 10:41:04 -08:00
JohnSaxon
bbe4466fd9
[Minor] Fix typo (#2166)
Co-authored-by: John-Saxon <zhang.xiangxuan@oushu.com>
2023-12-17 23:28:49 -08:00
Harry Mellor
08133c4d1a
Add SSL arguments to API servers (#2109) 2023-12-18 10:56:23 +08:00