xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2026-01-08 03:25:18 +08:00
vllm/vllm/model_executor
Latest commit: 51c599f0ec by Harry Mellor
Skip models that cannot currently init on Transformers v5 (#28471)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
2025-11-12 23:43:57 +00:00
layers          [MoE][Kernel][Perf] Improve Shared Expert Stream Overlap (#28406)             2025-11-12 23:37:24 +00:00
model_loader    Skip models that cannot currently init on Transformers v5 (#28471)            2025-11-12 23:43:57 +00:00
models          Skip models that cannot currently init on Transformers v5 (#28471)            2025-11-12 23:43:57 +00:00
warmup          [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)   2025-11-11 18:58:33 -08:00
__init__.py     …
custom_op.py    …
parameter.py    …
utils.py        [Chore] Clean up pytorch helper functions in vllm.utils (#26908)              2025-10-18 09:48:22 -07:00