xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-28 16:31:49 +08:00)
vllm/vllm/model_executor
Latest commit: 67c153b88a by Po-Han Huang (NVIDIA)
Fix Llama4 FlashInfer FP4 MoE issues (#22511)
Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
2025-08-12 05:50:59 -07:00
Name                  Last commit                                                                 Date
layers                Fix Llama4 FlashInfer FP4 MoE issues (#22511)                               2025-08-12 05:50:59 -07:00
model_loader          Enable 4bit bnb prequant MOE (#21548)                                       2025-08-11 19:02:14 -07:00
models                [CI Failure] fix tests/entrypoints/openai/test_skip_tokenizer.py (#22708)   2025-08-12 05:42:58 -07:00
warmup                [Docs] Fix warnings in docs build (#22588)                                  2025-08-10 05:49:51 -07:00
__init__.py           [Misc] Add SPDX-FileCopyrightText (#19100)                                  2025-06-03 11:20:17 -07:00
custom_op.py          Optimize configuration access with LRU cache in custom ops (#22204)        2025-08-04 21:43:24 -07:00
parameter.py          [Misc] Add SPDX-FileCopyrightText (#19100)                                  2025-06-03 11:20:17 -07:00
pooling_metadata.py   [Model][1/N] Support multiple poolers at model level (#21227)               2025-07-21 02:22:21 -07:00
sampling_metadata.py  Revert "Update sampling_metadata.py (#21937)" (#22088)                      2025-08-01 05:24:46 -07:00
utils.py              [Quantization] Enable BNB support for InternS1 (#21953)                     2025-08-01 11:09:54 +00:00