vllm / tests / models / quantization
Latest commit: 42c1949643 by Tsukasa OI
[Bugfix][Quantization] Support BF16 tensors on GGUF (#29948)
Signed-off-by: Tsukasa OI <floss_llm@irq.a4lg.com>
2025-12-03 10:33:46 +00:00
| File | Last commit | Date |
|---|---|---|
| __init__.py | … | |
| test_awq.py | … | |
| test_bitblas.py | … | |
| test_bitsandbytes.py | Default model load/config/tokenizer to mistral format if relevant files exist (#28659) | 2025-11-21 13:58:59 -08:00 |
| test_fp8.py | [Misc] Remove redundant attention var constants (#29650) | 2025-11-28 04:35:19 -08:00 |
| test_gguf.py | [Bugfix][Quantization] Support BF16 tensors on GGUF (#29948) | 2025-12-03 10:33:46 +00:00 |
| test_gpt_oss_attn_quantization.py | [Quantization] fix attention quantization of gpt_oss model (#27334) | 2025-11-11 12:06:00 -05:00 |
| test_gptq_bitblas.py | … | |
| test_gptq_marlin_24.py | … | |
| test_gptq_marlin.py | … | |
| test_modelopt.py | … | |
| test_mxfp4.py | … | |
| test_nvfp4.py | … | |
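For context, the test files in this directory exercise vLLM's quantized model loading paths (AWQ, GGUF, GPTQ/Marlin, FP8, and so on). Below is a minimal sketch of how a quantized checkpoint is typically loaded and run through vLLM's offline `LLM` API; the model name is illustrative, and a locally downloadable AWQ checkpoint is assumed.

```python
from vllm import LLM, SamplingParams

# Load a quantized checkpoint. vLLM usually infers the quantization scheme
# from the checkpoint config, but it can also be stated explicitly, much as
# the per-format tests in this directory pin down a specific scheme.
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # illustrative AWQ checkpoint, not from this repo
    quantization="awq",
)

# Generate deterministically so the output is easy to compare across runs.
params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```

Each test file can also be run individually with pytest, e.g. `pytest tests/models/quantization/test_gguf.py`.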