xinyun / vllm (mirror of https://git.datalinker.icu/vllm-project/vllm.git)
vllm / vllm / model_executor · History
Latest commit: 7e06c40e63 by Isotr0py · [Bugfix] Fix broken MRoPE for GLM-4.1V/GLM-4.5V (#27860) · 2025-10-31 17:04:51 +00:00
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
..
layers          [Perf] Decouple torch op from GDA to leverage torch.compile (#27871)    2025-10-31 21:35:52 +08:00
model_loader    [Feat] Adds runai distributed streamer (#27230)    2025-10-29 21:09:10 -07:00
models          [Bugfix] Fix broken MRoPE for GLM-4.1V/GLM-4.5V (#27860)    2025-10-31 17:04:51 +00:00
warmup          [BugFix] Stopgap - Flashinfer Autotuner + GPT-OSS + DP/TP (#27762)    2025-10-30 08:24:31 -07:00
__init__.py     …
custom_op.py    [FrontEnd] UNREVERT CompilationConfig overhaul (#20283): deprecate use_inductor in favor of backend, simplify custom_ops (#26502)    2025-10-13 22:47:16 +00:00
parameter.py    [Docs] Replace rst style double-backtick with md single-backtick (#27091)    2025-10-17 02:47:34 -07:00
utils.py        [Chore] Clean up pytorch helper functions in vllm.utils (#26908)    2025-10-18 09:48:22 -07:00