vllm/vllm/model_executor/layers/rotary_embedding
Latest commit: b9489f51e1 by Canlin Guo, 2025-11-18 11:51:54 +00:00
    [Model][Perf] Use cos and sin cache in QwenVL (#28798)
    Signed-off-by: gcanlin <canlinguosdu@gmail.com>
__init__.py
    Add llama 4 scaling support (#28145)
    2025-11-06 18:55:17 +00:00
base.py
    [Model][Perf] Use cos and sin cache in QwenVL (#28798)
    2025-11-18 11:51:54 +00:00
common.py
    [Bugfix][ROCm] Fix ViT rotary embeddings for torch.compile compatibility on ROCm (#27748)
    2025-11-03 17:12:19 -08:00
deepseek_scaling_rope.py
    [RFC][ROCm][AITER] Keep all AITER kernels in _aiter_ops class like _custom_ops and _ipex_ops (#24490)
    2025-11-10 08:20:53 -08:00
dual_chunk_rope.py
    …
dynamic_ntk_alpha_rope.py
    …
dynamic_ntk_scaling_rope.py
    …
ernie45_vl_rope.py
    …
linear_scaling_rope.py
    …
llama3_rope.py
    …
llama4_vision_rope.py
    [XPU][bugfix] fix rope for llama4 and deepseek (#25145)
    2025-10-30 09:43:13 +08:00
mrope.py
    [Bugfix][CPU] Fix MRoPE dispatch on the CPU backend (#27800)
    2025-10-30 15:12:05 +00:00
ntk_scaling_rope.py
    …
phi3_long_rope_scaled_rope.py
    …
yarn_scaling_rope.py
    Add llama 4 scaling support (#28145)
    2025-11-06 18:55:17 +00:00
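The headline commit here, "[Model][Perf] Use cos and sin cache in QwenVL (#28798)", refers to precomputing the cos and sin terms of the rotary embedding once and reusing them across forward passes instead of rederiving them on every call. Below is a minimal sketch of that caching pattern, not vLLM's actual implementation: the function names, tensor shapes, and the even/odd interleaved rotation layout are all illustrative assumptions.

```python
import torch

def build_cos_sin_cache(head_dim: int, max_positions: int, base: float = 10000.0):
    """Precompute cos/sin tables for rotary embeddings (illustrative sketch)."""
    # Inverse frequencies, one per even/odd dimension pair.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Outer product gives one rotation angle per (position, frequency) pair.
    angles = torch.outer(torch.arange(max_positions).float(), inv_freq)
    # Each table has shape [max_positions, head_dim // 2].
    return angles.cos(), angles.sin()

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor,
               positions: torch.Tensor) -> torch.Tensor:
    """Rotate query/key vectors using the cached tables.

    x: [num_tokens, num_heads, head_dim]; positions: [num_tokens].
    """
    # Index the cache instead of recomputing cos/sin per token.
    cos = cos[positions].unsqueeze(1)  # [num_tokens, 1, head_dim // 2]
    sin = sin[positions].unsqueeze(1)
    x1, x2 = x[..., ::2], x[..., 1::2]  # split into even/odd pairs
    out = torch.empty_like(x)
    out[..., ::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

A caller would build the tables once at model initialization (e.g. `cos, sin = build_cos_sin_cache(128, 32768)`) and then pass token positions at each step, so the per-step cost is a table lookup rather than a fresh trigonometric computation.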