xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-06 18:10:53 +08:00
Directory: vllm / vllm / v1 / core
Latest commit: bf6a3d0ff5 by Wei Wei, 2025-11-10 21:03:21 +00:00
[Misc] Add more scoping for improved trace (#28329)
Signed-off-by: Wei Wei <wwei6@meta.com>
File                             Last commit                                                                                             Date
sched                            [Misc] Add more scoping for improved trace (#28329)                                                     2025-11-10 21:03:21 +00:00
__init__.py                      …
block_pool.py                    [BugFix][LoRA] use adapter_id instead of id field of lora_request (#27728)                              2025-11-03 10:08:08 +08:00
encoder_cache_manager.py         [Misc] Simplify max tokens in multimodal registry (#27500)                                              2025-10-24 23:56:01 -07:00
kv_cache_coordinator.py          …
kv_cache_manager.py              [Core][Perf] Only invoke save_new_computed_blocks when computed blocks are not empty (#27799)           2025-10-30 19:47:30 +00:00
kv_cache_utils.py                [Chore]: Extract math and argparse utilities to separate modules (#27188)                               2025-10-26 04:03:32 -07:00
single_type_kv_cache_manager.py  [Core][Hybrid allocator + connector 2/n] Unify remove_skipped_blocks by get_last_useful_token (#25431)  2025-11-06 00:12:00 +00:00