xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-15 10:35:01 +08:00)
vllm/vllm/model_executor/layers
Latest commit: e9da5a40c6 by Kunshang Ji, [Misc] Add indirection layer for custom ops (#3913), 2024-04-10 20:26:07 -07:00
Name                         Last commit message                                                    Last commit date
fused_moe/                   [Misc] Add indirection layer for custom ops (#3913)                    2024-04-10 20:26:07 -07:00
ops/                         [CI] Try introducing isort. (#3495)                                    2024-03-25 07:59:47 -07:00
quantization/                [Misc] Add indirection layer for custom ops (#3913)                    2024-04-10 20:26:07 -07:00
__init__.py                  Change the name to vLLM (#150)                                         2023-06-17 03:07:40 -07:00
activation.py                [Misc] Add indirection layer for custom ops (#3913)                    2024-04-10 20:26:07 -07:00
layernorm.py                 [Misc] Add indirection layer for custom ops (#3913)                    2024-04-10 20:26:07 -07:00
linear.py                    [Core][Refactor] move parallel_utils into vllm/distributed (#3950)    2024-04-10 15:33:30 -07:00
logits_processor.py          [Core][Refactor] move parallel_utils into vllm/distributed (#3950)    2024-04-10 15:33:30 -07:00
rejection_sampler.py         [CI] Try introducing isort. (#3495)                                    2024-03-25 07:59:47 -07:00
rotary_embedding.py          [Misc] Add indirection layer for custom ops (#3913)                    2024-04-10 20:26:07 -07:00
sampler.py                   [Bugfix] handle prompt_logprobs in _apply_min_tokens_penalty (#3876)  2024-04-10 01:39:56 -07:00
vocab_parallel_embedding.py  [Core][Refactor] move parallel_utils into vllm/distributed (#3950)    2024-04-10 15:33:30 -07:00
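
As a rough orientation, below is a minimal sketch of how two of these modules might be exercised in isolation. It assumes vLLM around this revision is installed with its compiled CUDA kernels and that a GPU is available; the hidden size, shapes, dtypes, and eps value are illustrative assumptions, not taken from the listing.

# Minimal sketch: RMSNorm (layernorm.py) and SiluAndMul (activation.py).
# Assumes vLLM is installed with its compiled CUDA ops and a GPU is present;
# both modules call into custom kernels (see the indirection layer from #3913).
import torch
from vllm.model_executor.layers.activation import SiluAndMul
from vllm.model_executor.layers.layernorm import RMSNorm

hidden_size = 4096  # illustrative value
x = torch.randn(2, 8, hidden_size, dtype=torch.float16, device="cuda")

norm = RMSNorm(hidden_size, eps=1e-6).to(device="cuda", dtype=torch.float16)
y = norm(x)  # RMS-normalized hidden states, same shape as x

act = SiluAndMul()
gate_up = torch.randn(2, 8, 2 * hidden_size, dtype=torch.float16, device="cuda")
z = act(gate_up)  # SiLU(gate) * up; last dimension is halved to hidden_size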