xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-16 17:44:30 +08:00
vllm / tests / entrypoints / openai

History
Latest commit: Mark McLoughlin, 1cd981da4f, [V1][Metrics] Support vllm:cache_config_info (#13299), 2025-02-22 00:20:00 -08:00
correctness/                      …
reasoning_parsers/                …
tool_parsers/                     …
__init__.py                       …
test_async_tokenization.py        …
test_audio.py                     …
test_basic.py                     …
test_chat_echo.py                 …
test_chat_template.py             …
test_chat.py                      [HTTP Server] Make model param optional in request (#13568)   2025-02-21 21:55:50 -08:00
test_chunked_prompt.py            …
test_cli_args.py                  …
test_completion.py                …
test_embedding.py                 …
test_encoder_decoder.py           …
test_lora_adapters.py             …
test_metrics.py                   [V1][Metrics] Support vllm:cache_config_info (#13299)          2025-02-22 00:20:00 -08:00
test_models.py                    …
test_oot_registration.py          …
test_pooling.py                   …
test_prompt_validation.py         …
test_rerank.py                    …
test_return_tokens_as_ids.py      …
test_root_path.py                 …
test_run_batch.py                 …
test_score.py                     …
test_serving_chat.py              …
test_serving_models.py            …
test_shutdown.py                  …
test_sleep.py                     …
test_tokenization.py              …
test_transcription_validation.py  …
test_video.py                     …
test_vision.py                    …
test_vision_embedding.py          …