Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-27 05:55:15 +08:00
vllm / vllm / model_executor
Latest commit: b41aeb3468 by Pleaplusone, [Bugfix][ROCm] Fix load issue on deepseek quark quantization when shared expert enabled (#31261), Signed-off-by: ganyi <ygan@amd.com>, 2025-12-24 16:47:44 +08:00
layers: [Mamba] - Consolidate Mambas Attention Logic (#28133), 2025-12-23 21:57:00 +01:00
model_loader: [BugFix] skip language model in Encoder (#30242), 2025-12-22 05:25:59 -08:00
models: [Bugfix][ROCm] Fix load issue on deepseek quark quantization when shared expert enabled (#31261), 2025-12-24 16:47:44 +08:00
warmup: [UX] Reduce DeepGEMM warmup log output to single progress bar (#30903), 2025-12-17 20:21:51 -08:00
__init__.py: …
custom_op.py: [CustomOp] Support object-level enable for CustomOp (#30547), 2025-12-15 11:02:09 +08:00
parameter.py: …
utils.py: [Quantization] FP8 Weight Reloading for Quantized RL Rollout (#28480), 2025-12-09 13:54:32 -08:00