xinyun / vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-25 19:21:53 +08:00
vllm / tests / v1 / core (history)
Latest commit: 4716377fbc by rongfu.leng | [Feature] Estimate max-model-len use available KV cache memory (#16168) | Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io> | 2025-04-08 19:12:51 -07:00
test_kv_cache_utils.py      | [Feature] Estimate max-model-len use available KV cache memory (#16168)            | 2025-04-08 19:12:51 -07:00
test_prefix_caching.py      | [V1] Implement sliding window attention in kv_cache_manager (#14097)               | 2025-04-01 00:33:17 -07:00
test_scheduler_e2e.py       | [V1] Support long_prefill_token_threshold in v1 scheduler (#15419)                 | 2025-03-25 14:22:26 -07:00
test_scheduler.py           | [V1] Add disable_chunked_mm_input arg to disable partial mm input prefill (#15837) | 2025-04-07 23:24:07 -07:00
test_specialized_manager.py | [V1] Implement sliding window attention in kv_cache_manager (#14097)               | 2025-04-01 00:33:17 -07:00