xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-03-31 23:47:07 +08:00
vllm / vllm / engine

Latest commit: 7508a3dc34 by Thomas Parnell
[Misc] Fix typos in spec. decode metrics logging. (#6470)
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
2024-07-16 13:55:15 +00:00
output_processor       [BugFix] Prompt Logprobs Detokenization (#6223)                            2024-07-11 22:02:29 +00:00
__init__.py            Change the name to vLLM (#150)                                             2023-06-17 03:07:40 -07:00
arg_utils.py           [CORE] Adding support for insertion of soft-tuned prompts (#4645)          2024-07-09 13:26:36 -07:00
async_llm_engine.py    [BugFix] get_and_reset only when scheduler outputs are not empty (#6266)   2024-07-11 07:40:20 -07:00
async_timeout.py       [Bugfix] AsyncLLMEngine hangs with asyncio.run (#5654)                     2024-06-19 13:57:12 -07:00
llm_engine.py          [Bugfix] Fix usage stats logging exception warning with OpenVINO (#6349)   2024-07-12 10:47:00 +08:00
metrics.py             [Misc] Fix typos in spec. decode metrics logging. (#6470)                  2024-07-16 13:55:15 +00:00