xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-17 14:34:36 +08:00
Directory: vllm/vllm/v1/core
Latest commit: 71df2a57ef by Chen Zhang, 2025-11-24 14:28:32 -08:00
[Hybrid Allocator] Better layer padding strategy for gpt-oss eagle (#29303)
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
..
sched/                            [Misc] Further clean up chunked prefill and prefix caching init (#29186)       2025-11-22 19:34:15 +08:00
__init__.py                       …
block_pool.py                     [BugFix][LoRA] use adapter_id instead of id field of lora_request (#27728)     2025-11-03 10:08:08 +08:00
encoder_cache_manager.py          …
kv_cache_coordinator.py           [Feature] Prefill Context Parallel (PCP) basic support (#28718)                2025-11-19 15:52:44 -05:00
kv_cache_manager.py               [Feature] Prefill Context Parallel (PCP) basic support (#28718)                2025-11-19 15:52:44 -05:00
kv_cache_utils.py                 [Hybrid Allocator] Better layer padding strategy for gpt-oss eagle (#29303)    2025-11-24 14:28:32 -08:00
single_type_kv_cache_manager.py   [Feature] Prefill Context Parallel (PCP) basic support (#28718)                2025-11-19 15:52:44 -05:00