xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-10 08:45:00 +08:00
vllm / tests / entrypoints
Latest commit: 395b1c7454 by tomeras91, 2024-11-27 13:21:10 -08:00
[Frontend] don't block event loop in tokenization (preprocess) in OpenAI compatible server (#10635)
Signed-off-by: Tomer Asida <tomera@ai21.com>
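The commit above keeps CPU-bound tokenization from stalling the server's asyncio event loop during request preprocessing. A minimal sketch of the general pattern, not vLLM's actual implementation; `tokenize_sync` here is a hypothetical stand-in for a blocking tokenizer call:

```python
import asyncio
from typing import List

def tokenize_sync(text: str) -> List[int]:
    # Hypothetical stand-in for a blocking, CPU-bound tokenizer call
    # (e.g. a Hugging Face tokenizer invoked during preprocessing).
    return [ord(c) for c in text]

async def tokenize_async(text: str) -> List[int]:
    # Run the blocking call in a worker thread so the event loop can
    # keep serving other requests while tokenization is in flight.
    return await asyncio.to_thread(tokenize_sync, text)

async def main() -> None:
    token_ids = await tokenize_async("hello world")
    print(token_ids)

if __name__ == "__main__":
    asyncio.run(main())
```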
Last updated                Name                Last commit
2024-11-27 09:26:14 -08:00  llm/                [ci] fix slow tests (#10698)
2024-10-18 18:12:32 -07:00  offline_mode/       [Bugfix] Fix offline mode when using mistral_common (#9457)
2024-11-27 13:21:10 -08:00  openai/             [Frontend] don't block event loop in tokenization (preprocess) in OpenAI compatible server (#10635)
2024-05-13 23:50:09 +09:00  __init__.py         [CI/Build] Move test_utils.py to tests/utils.py (#4425)
2024-08-04 03:12:09 +00:00  conftest.py         Support for guided decoding for offline LLM (#6878)
2024-11-23 10:17:38 +08:00  test_chat_utils.py  [Bugfix][Frontend] Update Llama Chat Templates to also support Non-Tool use (#10164)