xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-03-20 03:07:57 +08:00)
vllm / tests / quantization
Latest commit: 380e18639f by Joe Runde — 🐛 fix torch memory profiling (#9516)
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
2024-10-18 21:25:19 -04:00
| File | Last commit | Date |
|------|-------------|------|
| `..` | | |
| `__init__.py` | … | |
| `test_bitsandbytes.py` | 🐛 fix torch memory profiling (#9516) | 2024-10-18 21:25:19 -04:00 |
| `test_compressed_tensors.py` | [Misc] Directly use compressed-tensors for checkpoint definitions (#8909) | 2024-10-15 15:40:25 -07:00 |
| `test_configs.py` | [Model] Add user-configurable task for models that support both generation and embedding (#9424) | 2024-10-18 11:31:58 -07:00 |
| `test_cpu_offload.py` | [ci][test] adjust max wait time for cpu offloading test (#7709) | 2024-08-20 17:12:44 -07:00 |
| `test_experts_int8.py` | … | |
| `test_fp8.py` | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| `test_ipex_quant.py` | [Hardware][CPU] Support AWQ for CPU backend (#7515) | 2024-10-09 10:28:08 -06:00 |
| `test_lm_head.py` | … | |
| `utils.py` | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |