xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-22 17:25:41 +08:00
History: vllm/vllm/distributed
Latest commit: 94a9ebcf31 by Yihua Cheng, 2025-11-12 20:25:43 +00:00
[KV connector][WIP] KV cache proxy based on LMCache multi-process mode (#27902)
Signed-off-by: ApostaC <yihua98@uchicago.edu>
device_communicators      [BugFix] Graceful handling of torch symm mem errors. (#27671)                                             2025-11-11 17:41:54 -07:00
ec_transfer               [CI/Build] Fix crash due to removed VLLM_USE_V1 attribute in EPD (#28521)                                 2025-11-11 23:09:33 -08:00
eplb                      [EPLB] Refactor balance_packing to use numpy and optimize GPU-CPU transfers in EPLB (#28369)              2025-11-11 00:19:51 -08:00
kv_transfer               [KV connector][WIP] KV cache proxy based on LMCache multi-process mode (#27902)                           2025-11-12 20:25:43 +00:00
__init__.py               …
communication_op.py       …
kv_events.py              Fix EventPublisherFactory logic for disabled KV cache events (#27419)                                     2025-10-24 05:00:01 +00:00
parallel_state.py         [Perf] Move gc.freeze logic from EngineCoreProc to EngineCore for better coverage (#27896)                2025-11-10 15:34:18 -08:00
tpu_distributed_utils.py  …
utils.py                  [Chore] Separate out vllm.utils.network_utils (#27164)                                                    2025-10-19 03:06:32 -07:00