Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-10 11:26:15 +08:00)
Directory: vllm/tests/models/quantization
Latest commit: 814843e021 "Enable bitsandbytes quantization on AMD GPUs that use warp size 32" (#27307) by Strahinja Stamenkovic (Signed-off-by: sstamenk <strahinja.stamenkovic@amd.com>), 2025-11-19 03:12:31 +00:00
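For context on what that commit touches: bitsandbytes quantization in vLLM is selected through the quantization argument when constructing an engine. The sketch below is illustrative only, assuming a vLLM build with the bitsandbytes package installed; the model name and generation settings are placeholders, not taken from these tests.

```python
from vllm import LLM, SamplingParams

# Illustrative: load a model with in-flight bitsandbytes quantization.
# quantization="bitsandbytes" selects the bnb code path that, per the
# commit title above, #27307 extends to AMD GPUs using warp size 32.
llm = LLM(
    model="facebook/opt-125m",  # placeholder model, not from the test suite
    quantization="bitsandbytes",
)

outputs = llm.generate(
    ["Quantization reduces memory footprint by"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```

The test files in this directory, listed with the last commit that touched each, follow.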
File                                Last commit                                                                   Date
__init__.py                         …
test_awq.py                         …
test_bitblas.py                     …
test_bitsandbytes.py                Enable bitsandbytes quantization on AMD GPUs that use warp size 32 (#27307)  2025-11-19 03:12:31 +00:00
test_fp8.py                         [Chore] Separate out optional dependency checks from vllm.utils (#27207)     2025-10-22 10:44:21 -04:00
test_gguf.py                        [Model] Add Gemma3 GGUF multimodal support (#27772)                          2025-11-18 08:56:29 -08:00
test_gpt_oss_attn_quantization.py   [Quantization] fix attention quantization of gpt_oss model (#27334)          2025-11-11 12:06:00 -05:00
test_gptq_bitblas.py                …
test_gptq_marlin_24.py              …
test_gptq_marlin.py                 …
test_modelopt.py                    …
test_mxfp4.py                       …
test_nvfp4.py                       …
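Each entry above is a standard pytest module. A typical local run, assuming a vLLM development install and a GPU supported by the scheme under test, is a plain pytest invocation; the snippet below is a sketch, not a project-mandated command.

```python
# Equivalent to running: pytest -v tests/models/quantization/test_bitsandbytes.py
import sys

import pytest

if __name__ == "__main__":
    sys.exit(pytest.main(["-v", "tests/models/quantization/test_bitsandbytes.py"]))
```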