vllm / vllm / model_executor

History
Latest commit 6038b1b04b by amitz-nv:
[Frontend][Model] Add 'float16' to possible mamba cache dtype values, override mamba SSM cache dtype value for NemotronH (#29978)
Signed-off-by: amitz-nv <203509407+amitz-nv@users.noreply.github.com>
2025-12-05 00:34:33 -08:00
..
layers
    [Perf] Enable separate shared_experts stream only for CUDA (#30085)
    2025-12-05 00:03:17 +00:00
model_loader
    [Bugfix][Quantization] Support BF16 tensors on GGUF (#29948)
    2025-12-03 10:33:46 +00:00
models
    [Frontend][Model] Add 'float16' to possible mamba cache dtype values, override mamba SSM cache dtype value for NemotronH (#29978)
    2025-12-05 00:34:33 -08:00
warmup
    [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233)
    2025-11-11 18:58:33 -08:00
__init__.py
    …
custom_op.py
    …
parameter.py
    …
utils.py
    [CI] Fix mypy for vllm/v1/worker (#29037)
    2025-11-21 11:36:07 +08:00
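For context on the latest commit above, here is a minimal, hedged sketch of how the new 'float16' mamba SSM cache dtype from #29978 might be selected from vLLM's Python API. It assumes vLLM exposes a `mamba_ssm_cache_dtype` engine argument (previously limited to "auto"/"float32") and uses a placeholder NemotronH model id; neither detail is confirmed by this listing.

```python
# Hedged sketch, not a confirmed API example for this commit.
# Assumptions: the LLM constructor forwards a `mamba_ssm_cache_dtype`
# engine argument, and the model id below stands in for a real
# NemotronH checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/Nemotron-H-8B-Base-8K",  # placeholder NemotronH model id
    mamba_ssm_cache_dtype="float16",       # assumed kwarg; "float16" is the value #29978 adds
)

# Quick smoke test of the configured engine.
out = llm.generate(["Hello"], SamplingParams(max_tokens=8))
print(out[0].outputs[0].text)
```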