xinyun / vllm (mirror of https://git.datalinker.icu/vllm-project/vllm.git)
vllm / vllm / engine / output_processor
Latest commit 7ed6a4f0e1 by Robert Shaw: [ BugFix ] Prompt Logprobs Detokenization (#6223)
Co-authored-by: Zifei Tong <zifeitong@gmail.com>
2024-07-11 22:02:29 +00:00
__init__.py       [Speculative decoding 6/9] Integrate speculative decoding with LLMEngine (#3894)    2024-04-16 13:09:21 -07:00
interfaces.py     [Core] Pipeline Parallel Support (#4412)                                            2024-07-02 10:58:08 -07:00
multi_step.py     [Core] Pipeline Parallel Support (#4412)                                            2024-07-02 10:58:08 -07:00
single_step.py    [ BugFix ] Prompt Logprobs Detokenization (#6223)                                   2024-07-11 22:02:29 +00:00
stop_checker.py   [Bugfix] Remove the last EOS token unless explicitly specified (#5077)              2024-05-28 17:15:35 -07:00
util.py           [Core] Consolidate prompt arguments to LLM engines (#4328)                          2024-05-28 13:29:31 -07:00
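The filenames above suggest the usual shape of an output-processing layer: a shared interface (interfaces.py), a single-step implementation (single_step.py), a multi-step implementation added alongside speculative decoding (multi_step.py, per #3894), and a helper that checks stop conditions such as the EOS token (stop_checker.py, per #5077). The sketch below illustrates that shape only; every class and method name in it is hypothetical and not taken from vLLM's actual API.

```python
# Minimal sketch of the layout these filenames suggest. All names below are
# hypothetical illustrations, not vLLM's real classes or signatures.
from abc import ABC, abstractmethod
from typing import List


class StopChecker:
    """Decides whether a sequence should stop (e.g. EOS token or stop string)."""

    def __init__(self, eos_token_id: int, stop_strings: List[str]):
        self.eos_token_id = eos_token_id
        self.stop_strings = stop_strings

    def should_stop(self, token_id: int, text: str) -> bool:
        return token_id == self.eos_token_id or any(
            text.endswith(s) for s in self.stop_strings)


class OutputProcessor(ABC):
    """Common interface: append newly sampled tokens to a sequence's text."""

    @abstractmethod
    def process_outputs(self, seq_text: str, new_token_ids: List[int]) -> str:
        ...


class SingleStepProcessor(OutputProcessor):
    """Handles exactly one new token per scheduling step."""

    def __init__(self, stop_checker: StopChecker):
        self.stop_checker = stop_checker

    def process_outputs(self, seq_text: str, new_token_ids: List[int]) -> str:
        assert len(new_token_ids) == 1
        # Detokenization is stubbed out; a real engine maps ids to text here.
        return seq_text + f"<{new_token_ids[0]}>"


class MultiStepProcessor(OutputProcessor):
    """Handles several new tokens at once (e.g. speculative decoding)."""

    def __init__(self, stop_checker: StopChecker):
        self.stop_checker = stop_checker

    def process_outputs(self, seq_text: str, new_token_ids: List[int]) -> str:
        for token_id in new_token_ids:
            seq_text += f"<{token_id}>"
            if self.stop_checker.should_stop(token_id, seq_text):
                break
        return seq_text


if __name__ == "__main__":
    checker = StopChecker(eos_token_id=2, stop_strings=["</s>"])
    proc = MultiStepProcessor(checker)
    print(proc.process_outputs("", [5, 7, 2, 9]))  # stops after token id 2
```

The point the two implementations illustrate is that a multi-step processor must re-check stop conditions after every appended token, because several tokens can arrive in a single scheduling step.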