| Author | Commit | Message | Date |
|--------|--------|---------|------|
| Zhuohan Li | 4aaafdd289 | [BUGFIX] Fix the path of test prompts (#2273) | 2023-12-26 10:37:21 -08:00 |
| Zhuohan Li | 66b108d142 | [BUGFIX] Fix API server test (#2270) | 2023-12-26 10:37:06 -08:00 |
| Zhuohan Li | e0ff920001 | [BUGFIX] Do not return ignored sentences twice in async llm engine (#2258) | 2023-12-26 13:41:09 +08:00 |
| blueceiling | face83c7ec | [Docs] Add "About" Heading to README.md (#2260) | 2023-12-25 16:37:07 -08:00 |
| Shivam Thakkar | 1db83e31a2 | [Docs] Update installation instructions to include CUDA 11.8 xFormers (#2246) | 2023-12-22 23:20:02 -08:00 |
| Woosuk Kwon | a1b9cb2a34 | [BugFix] Fix recovery logic for sequence group (#2186) | 2023-12-20 21:52:37 -08:00 |
| Woosuk Kwon | 3a4fd5ca59 | Disable Ray usage stats collection (#2206) | 2023-12-20 21:52:08 -08:00 |
| Ronen Schaffer | c17daa9f89 | [Docs] Fix broken links (#2222) | 2023-12-20 12:43:42 -08:00 |
| Antoni Baum | bd29cf3d3a | Remove Sampler copy stream (#2209) | 2023-12-20 00:04:33 -08:00 |
| Hanzhi Zhou | 31bff69151 | Make _prepare_sample non-blocking and use pinned memory for input buffers (#2207) | 2023-12-19 16:52:46 -08:00 |
| Woosuk Kwon | ba4f826738 | [BugFix] Fix weight loading for Mixtral with TP (#2208) | 2023-12-19 16:16:11 -08:00 |
| avideci | de60a3fb93 | Added DeciLM-7b and DeciLM-7b-instruct (#2062) | 2023-12-19 02:29:33 -08:00 |
| Woosuk Kwon | 21d5daa4ac | Add warning on CUDA graph memory usage (#2182) | 2023-12-18 18:16:17 -08:00 |
| Suhong Moon | 290e015c6c | Update Help Text for --gpu-memory-utilization Argument (#2183) | 2023-12-18 11:33:24 -08:00 |
| kliuae | 1b7c791d60 | [ROCm] Fixes for GPTQ on ROCm (#2180) | 2023-12-18 10:41:04 -08:00 |
| JohnSaxon | bbe4466fd9 | [Minor] Fix typo (#2166) (Co-authored-by: John-Saxon <zhang.xiangxuan@oushu.com>) | 2023-12-17 23:28:49 -08:00 |
| Harry Mellor | 08133c4d1a | Add SSL arguments to API servers (#2109) | 2023-12-18 10:56:23 +08:00 |
| Woosuk Kwon | 76a7983b23 | [BugFix] Fix RoPE kernel on long sequences (#2164) | 2023-12-17 17:09:10 -08:00 |
| Woosuk Kwon | 8041b7305e | [BugFix] Raise error when max_model_len is larger than KV cache (#2163) | 2023-12-17 17:08:23 -08:00 |
| Suhong Moon | 3ec8c25cd0 | [Docs] Update documentation for gpu-memory-utilization option (#2162) | 2023-12-17 10:51:57 -08:00 |
| Woosuk Kwon | 671af2b1c0 | Bump up to v0.2.6 (#2157) (tag: v0.2.6) | 2023-12-17 10:34:56 -08:00 |
| Woosuk Kwon | 6f41f0e377 | Disable CUDA graph for SqueezeLLM (#2161) | 2023-12-17 10:24:25 -08:00 |
| Woosuk Kwon | 2c9b638065 | [Minor] Fix a typo in .pt weight support (#2160) | 2023-12-17 10:12:44 -08:00 |
| Antoni Baum | a7347d9a6d | Make sampler less blocking (#1889) | 2023-12-17 23:03:49 +08:00 |
| Woosuk Kwon | f8c688d746 | [Minor] Add Phi 2 to supported models (#2159) | 2023-12-17 02:54:57 -08:00 |
| Woosuk Kwon | c9fadda543 | [Minor] Fix xformers version (#2158) | 2023-12-17 02:28:02 -08:00 |
| Woosuk Kwon | 30fb0956df | [Minor] Add more detailed explanation on quantization argument (#2145) | 2023-12-17 01:56:16 -08:00 |
| Woosuk Kwon | 3a765bd5e1 | Temporarily enforce eager mode for GPTQ models (#2154) | 2023-12-17 01:51:12 -08:00 |
| Woosuk Kwon | 26c52a5ea6 | [Docs] Add CUDA graph support to docs (#2148) | 2023-12-17 01:49:20 -08:00 |
| Woosuk Kwon | c3372e87be | Remove dependency on CuPy (#2152) | 2023-12-17 01:49:07 -08:00 |
| Woosuk Kwon | b0a1d667b0 | Pin PyTorch & xformers versions (#2155) | 2023-12-17 01:46:54 -08:00 |
| Woosuk Kwon | e1d5402238 | Fix all-reduce memory usage (#2151) | 2023-12-17 01:44:45 -08:00 |
| Woosuk Kwon | 3d1cfbfc74 | [Minor] Delete Llama tokenizer warnings (#2146) | 2023-12-16 22:05:18 -08:00 |
| Woosuk Kwon | 37ca558103 | Optimize model execution with CUDA graph (#1926) (Co-authored-by: Chen Shen <scv119@gmail.com>, Antoni Baum <antoni.baum@protonmail.com>) | 2023-12-16 21:12:08 -08:00 |
| Roy | eed74a558f | Simplify weight loading logic (#2133) | 2023-12-16 12:41:23 -08:00 |
| Woosuk Kwon | 2acd76f346 | [ROCm] Temporarily remove GPTQ ROCm support (#2138) | 2023-12-15 17:13:58 -08:00 |
| Woosuk Kwon | b81a6a6bb3 | [Docs] Add supported quantization methods to docs (#2135) | 2023-12-15 13:29:22 -08:00 |
| CHU Tianxiang | 0fbfc4b81b | Add GPTQ support (#916) | 2023-12-15 03:04:22 -08:00 |
| Yunfeng Bai | c06170cc8e | Add a flag to include stop string in output text (#1976) | 2023-12-15 00:45:58 -08:00 |
| Mingcan Xiang | 614856da25 | Avoid multiple redefinition (#1817) | 2023-12-14 09:35:58 -08:00 |
| TJian | 05bdf4eaf3 | Fix Dockerfile.rocm (#2101) (Co-authored-by: miloice <jeffaw99@hotmail.com>) | 2023-12-14 00:45:58 -08:00 |
| mezuzza | 6774bd50b0 | Fix typing in AsyncLLMEngine & add toml to requirements-dev (#2100) | 2023-12-14 00:19:41 -08:00 |
| Woosuk Kwon | 31c1f3255e | Bump up to v0.2.5 (#2095) (tag: v0.2.5) | 2023-12-13 23:56:15 -08:00 |
| Antoni Baum | 21d93c140d | Optimize Mixtral with expert parallelism (#2090) | 2023-12-13 23:55:07 -08:00 |
| Woosuk Kwon | f1c8520146 | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| Woosuk Kwon | 096827c284 | [Docs] Add notes on ROCm-supported models (#2087) | 2023-12-13 09:45:34 -08:00 |
| Woosuk Kwon | 6565d9e33e | Update installation instruction for vLLM + CUDA 11.8 (#2086) | 2023-12-13 09:25:59 -08:00 |
| TJian | f375ec8440 | [ROCm] Upgrade xformers version for ROCm & update doc (#2079) (Co-authored-by: miloice <jeffaw99@hotmail.com>) | 2023-12-13 00:56:05 -08:00 |
| Woosuk Kwon | 518369d78c | Implement lazy model loader (#2044) | 2023-12-12 22:21:45 -08:00 |
| Woosuk Kwon | 30bad5c492 | Fix peak memory profiling (#2031) | 2023-12-12 22:01:53 -08:00 |