xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-22 17:55:01 +08:00
Directory: vllm/tests/v1/core
Latest commit: 58ce8d12b7 "[BugFix] Priority scheduling and spec tokens preemption (#28558)", Signed-off-by: Andy Lo <andy@mistral.ai>, 2025-11-12 20:29:21 +00:00
Name                                   Last commit                                                                   Last updated
..
__init__.py                            …
test_async_scheduler.py                [AsyncScheduling] Don't schedule past request max_tokens (#27922)             2025-11-04 17:06:28 +00:00
test_encoder_cache_manager.py          …
test_kv_cache_utils.py                 …
test_kv_sharing.py                     …
test_prefix_caching.py                 [BugFix][LoRA] use adapter_id instead of id field of lora_request (#27728)    2025-11-03 10:08:08 +08:00
test_priority_scheduler_random.py      [BugFix] Priority scheduling and spec tokens preemption (#28558)              2025-11-12 20:29:21 +00:00
test_scheduler_e2e.py                  …
test_scheduler.py                      [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)   2025-11-11 18:58:33 -08:00
test_single_type_kv_cache_manager.py   …
utils.py                               [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)   2025-11-11 18:58:33 -08:00