xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-03-20 08:30:10 +08:00)

vllm/tests/entrypoints/llm

Latest commit: 70116459c3 by Nick Hill, 2025-04-25 22:20:05 +00:00
[BugFix][Frontend] Fix LLM.chat() tokenization (#16081)
Signed-off-by: Nick Hill <nhill@redhat.com>

File | Last commit | Date
---- | ----------- | ----
__init__.py | … |
test_accuracy.py | [TPU] Support sliding window and logit soft capping in the paged attention kernel for TPU. (#15732) | 2025-04-03 14:23:28 -07:00
test_chat.py | [BugFix][Frontend] Fix LLM.chat() tokenization (#16081) | 2025-04-25 22:20:05 +00:00
test_collective_rpc.py | … |
test_encode.py | … |
test_generate_multiple_loras.py | … |
test_generate.py | … |
test_gpu_utilization.py | … |
test_guided_generate.py | [Bugfix] remove fallback in guided_json (int range, patterns) (#16725) | 2025-04-25 06:54:43 +00:00
test_init.py | … |
test_lazy_outlines.py | … |
test_prompt_validation.py | [Bugfix] Proper input validation for multi-modal encoder-decoder models (#16156) | 2025-04-08 09:45:21 -07:00
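
The commit at the head of this listing, #16081, fixes tokenization in LLM.chat(), the offline chat entrypoint that test_chat.py covers. For orientation, here is a minimal usage sketch of that entrypoint, assuming vLLM is installed; the model id and sampling settings are illustrative examples, not values taken from the tests:

```python
# Minimal sketch of the LLM.chat() entrypoint exercised by test_chat.py.
# Assumes vLLM is installed; the model id is only an example.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-0.5B-Instruct")

# chat() renders the conversation with the model's chat template and
# then tokenizes the result, the code path touched by #16081.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is vLLM?"},
]
outputs = llm.chat(messages, SamplingParams(temperature=0.0, max_tokens=64))
print(outputs[0].outputs[0].text)
```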