Author | Commit | Message | Date
Zhuohan Li | e90fc21f2e | [Hardware][Neuron] Refactor neuron support (#3471) | 2024-03-22 01:22:17 +00:00
Roy | f1c0fc3919 | Migrate logits computation and gather to model_runner (#3233) | 2024-03-20 23:25:01 +00:00
SangBin Cho | 6e435de766 | [1/n][Chunked Prefill] Refactor input query shapes (#3236) | 2024-03-20 14:46:05 -07:00
Antoni Baum | 426ec4ec67 | [1/n] Triton sampling kernel (#3186) (Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>) | 2024-03-20 14:45:08 -07:00
Woosuk Kwon | 5ee14494e4 | [Misc] Remove cache stream and cache events (#3461) | 2024-03-20 00:38:53 -07:00
Antoni Baum | 49eedea373 | [Core] Zero-copy asdict for InputMetadata (#3475) | 2024-03-18 22:56:40 +00:00
Woosuk Kwon | abfc4f3387 | [Misc] Use dataclass for InputMetadata (#3452) (Co-authored-by: youkaichao <youkaichao@126.com>) | 2024-03-17 10:02:46 +00:00
Zhuohan Li | 2f8844ba08 | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00
Cade Daniel | 8437bae6ef | [Speculative decoding 3/9] Worker which speculates, scores, and applies rejection sampling (#3103) | 2024-03-08 23:32:46 -08:00
ElizaWszola | b35cc93420 | Fix auto prefix bug (#3239) | 2024-03-07 16:37:28 -08:00
Michael Goin | 385da2dae2 | Measure model memory usage (#3120) | 2024-03-07 11:42:42 -08:00
Hongxia Yang | 05af6da8d9 | [ROCm] Enable CuPy to enable CUDA graph mode for AMD GPUs (#3123) (Co-authored-by: lcskrishna <lollachaitanya@gmail.com>) | 2024-03-04 18:14:53 -08:00
Antoni Baum | 22de45235c | Push logprob generation to LLMEngine (#3065) (Co-authored-by: Avnish Narayan <avnish@anyscale.com>) | 2024-03-04 19:54:06 +00:00
Sage Moore | ce4f5a29fb | Add Automatic Prefix Caching (#2762) (Co-authored-by: ElizaWszola <eliza@neuralmagic.com>, Michael Goin <michael@neuralmagic.com>) | 2024-03-02 00:50:01 -08:00
Liangfu Chen | 3b7178cfa4 | [Neuron] Support inference with transformers-neuronx (#2569) | 2024-02-28 09:34:34 -08:00
Nick Hill | 7d2dcce175 | Support per-request seed (#2514) | 2024-02-21 11:47:00 -08:00
Zhuohan Li | 537c9755a7 | [Minor] Small fix to make distributed init logic in worker look cleaner (#2905) | 2024-02-18 14:39:00 -08:00
Woosuk Kwon | 25e86b6a61 | Don't use cupy NCCL for AMD backends (#2855) | 2024-02-14 12:30:44 -08:00
Woosuk Kwon | 7e45107f51 | [Fix] Fix memory profiling when GPU is used by multiple processes (#2863) | 2024-02-13 19:52:34 -08:00
Terry | 2a543d6efe | Add LoRA support for Mixtral (#2831) (squash log: add mixtral lora support; formatting; fix incorrectly ported logic; polish tests; minor fixes and refactoring; minor fixes; rename and remove redundant logic; refactoring; minor fix; minor refactoring; fix code smell) | 2024-02-14 00:55:45 +01:00
Woosuk Kwon | a463c333dd | Use CuPy for CUDA graphs (#2811) | 2024-02-13 11:32:06 -08:00
Kunshang Ji | 96b6f475dd | Remove hardcoded device="cuda" to support more devices (#2503) (Co-authored-by: Jiang Li <jiang1.li@intel.com>, Kunshang Ji <kunshang.ji@intel.com>) | 2024-02-01 15:46:39 -08:00
Tao He | d69ff0cbbb | Fixes assertion failure in prefix caching: the LoRA index mapping should respect prefix_len (#2688) (Signed-off-by: Tao He <sighingnow@gmail.com>) | 2024-01-31 18:00:13 +01:00
zhaoyang-star | 9090bf02e7 | Support FP8-E5M2 KV Cache (#2279) (Co-authored-by: zhaoyang <zhao.yang16@zte.com.cn>, Zhuohan Li <zhuohan123@gmail.com>) | 2024-01-28 16:43:54 -08:00
Hanzhi Zhou | 380170038e | Implement custom all reduce kernels (#2192) | 2024-01-27 12:46:35 -08:00
Antoni Baum | 9b945daaf1 | [Experimental] Add multi-LoRA support (#1804) (Co-authored-by: Chen Shen <scv119@gmail.com>, Shreyas Krishnaswamy <shrekris@anyscale.com>, Avnish Narayan <avnish@anyscale.com>) | 2024-01-23 15:26:37 -08:00
Cade Daniel | 18bfcdd05c | [Speculative decoding 2/9] Multi-step worker for draft model (#2424) | 2024-01-21 16:31:47 -08:00
Zhuohan Li | ef9b636e2d | Simplify broadcast logic for control messages (#2501) | 2024-01-19 11:23:30 -08:00
shiyi.c_98 | d10f8e1d43 | [Experimental] Prefix Caching Support (#1669) (Co-authored-by: DouHappy <2278958187@qq.com>, Zhuohan Li <zhuohan123@gmail.com>) | 2024-01-17 16:32:10 -08:00
Roy | 9f659bf07f | [Minor] Optimize cuda graph memory usage (#2437) | 2024-01-14 18:40:51 +01:00
Woosuk Kwon | 35c4bc20d9 | [Minor] Fix err msg (#2431) | 2024-01-12 14:02:52 -08:00
Ben | cb7a1c1cbf | Suggest using dtype=half when OOM. | 2024-01-12 12:33:29 -08:00
Jiaxiang | 6549aef245 | [DOC] Add additional comments for LLMEngine and AsyncLLMEngine (#1011) | 2024-01-11 19:26:49 -08:00
Woosuk Kwon | c884819135 | Fix eager mode performance (#2377) | 2024-01-08 10:11:06 -08:00
Zhuohan Li | fd4ea8ef5c | Use NCCL instead of ray for control-plane communication to remove serialization overhead (#2221) | 2024-01-03 11:30:22 -08:00
Hanzhi Zhou | 31bff69151 | Make _prepare_sample non-blocking and use pinned memory for input buffers (#2207) | 2023-12-19 16:52:46 -08:00
Woosuk Kwon | 21d5daa4ac | Add warning on CUDA graph memory usage (#2182) | 2023-12-18 18:16:17 -08:00
Woosuk Kwon | c3372e87be | Remove dependency on CuPy (#2152) | 2023-12-17 01:49:07 -08:00
Woosuk Kwon | e1d5402238 | Fix all-reduce memory usage (#2151) | 2023-12-17 01:44:45 -08:00
Woosuk Kwon | 37ca558103 | Optimize model execution with CUDA graph (#1926) (Co-authored-by: Chen Shen <scv119@gmail.com>, Antoni Baum <antoni.baum@protonmail.com>) | 2023-12-16 21:12:08 -08:00
Woosuk Kwon | f1c8520146 | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00
Woosuk Kwon | 30bad5c492 | Fix peak memory profiling (#2031) | 2023-12-12 22:01:53 -08:00
Woosuk Kwon | 5f09cbdb63 | Fix broken sampler tests (#1896) (Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>) | 2023-12-02 16:06:17 -08:00
Woosuk Kwon | 27feead2f8 | Refactor Worker & InputMetadata (#1843) | 2023-11-29 22:16:37 -08:00
Yanming W | e0c6f556e8 | [Build] Avoid building too many extensions (#1624) | 2023-11-23 16:31:19 -08:00
boydfd | 4bb6b67188 | Fix RAM OOM when loading large models in tensor parallel mode (#1395) (Co-authored-by: ran_lin <rlin@thoughtworks.com>) | 2023-11-20 19:02:42 -08:00
Simon Mo | 5ffc0d13a2 | Migrate linter from pylint to ruff (#1665) | 2023-11-20 11:58:01 -08:00
Yanming W | 8efe23f150 | Fix input_metadata.selected_token_indices in worker prepare_inputs (#1546) | 2023-11-08 14:19:12 -08:00
Antoni Baum | 9738b84a08 | Force paged attention v2 for long contexts (#1510) | 2023-11-01 16:24:32 -07:00
Woosuk Kwon | 0ce8647dc5 | Fix integer overflows in attention & cache ops (#1514) | 2023-10-31 15:19:30 -07:00