xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-21 03:15:29 +08:00
vllm/model_executor
History

Latest commit: 5e5646e206 by Julien Denize, 2025-12-02 14:51:20 -08:00
[BUGFIX] llama_4_scaling wrongly passed to DeepseekAttention (#29908)
Signed-off-by: juliendenize <julien.denize@mistral.ai>
layers          [Attention][CUDAGraph] Remove CG padding from attention backends (#29352)          2025-12-02 13:48:08 -05:00
model_loader    [Chore]: Reorganize model repo operating functions in transformers_utils (#29680)  2025-11-28 08:46:51 -08:00
models          [BUGFIX] llama_4_scaling wrongly passed to DeepseekAttention (#29908)              2025-12-02 14:51:20 -08:00
warmup          [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)        2025-11-11 18:58:33 -08:00
__init__.py     …
custom_op.py    …
parameter.py    …
utils.py        [CI] Fix mypy for vllm/v1/worker (#29037)                                          2025-11-21 11:36:07 +08:00