[BugFix] Force registration of w8a8_block_fp8_matmul_deepgemm via lazy import (#19514)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Author: Varun Sundar Rabindranath
Date: 2025-06-12 00:28:12 -04:00 (committed by GitHub)
Parent: 2f1c19b245
Commit: e5d35d62f5


@@ -143,6 +143,7 @@ def apply_w8a8_block_fp8_linear(
         column_major_scales=True,
     )
+    import vllm.model_executor.layers.quantization.deepgemm  # noqa: F401
     output = torch.ops.vllm.w8a8_block_fp8_matmul_deepgemm(
         q_input,
         weight,
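
The pattern behind the fix: `torch.ops.vllm.w8a8_block_fp8_matmul_deepgemm` only exists after the `deepgemm` module has run its registration code at import time, so the call site must force that import even though the imported name is never used (hence the `# noqa: F401`). A minimal, torch-free sketch of the same idea, using hypothetical stand-in modules `op_registry` and `deepgemm_stub` written to a temp directory:

```python
import os
import sys
import tempfile

_dir = tempfile.mkdtemp()

# op_registry.py: stand-in for the torch.ops.vllm op table.
with open(os.path.join(_dir, "op_registry.py"), "w") as f:
    f.write("OPS = {}\n")

# deepgemm_stub.py: registers its op as an import-time side effect,
# a hypothetical stand-in for
# vllm.model_executor.layers.quantization.deepgemm.
with open(os.path.join(_dir, "deepgemm_stub.py"), "w") as f:
    f.write(
        "from op_registry import OPS\n"
        "def matmul(a, b):\n"
        "    return [[sum(x * y for x, y in zip(row, col))\n"
        "             for col in zip(*b)] for row in a]\n"
        "OPS['w8a8_block_fp8_matmul_deepgemm'] = matmul\n"
    )

sys.path.insert(0, _dir)
from op_registry import OPS

def apply_linear(q_input, weight):
    # The lazy import below is the whole bugfix: without it the op may
    # never have been registered, and the lookup would raise KeyError.
    import deepgemm_stub  # noqa: F401
    return OPS["w8a8_block_fp8_matmul_deepgemm"](q_input, weight)

assert "w8a8_block_fp8_matmul_deepgemm" not in OPS  # not yet registered
print(apply_linear([[1, 2]], [[3], [4]]))  # [[1*3 + 2*4]] == [[11]]
```

Doing the import inside the function, rather than at module top level, keeps the (potentially heavy, GPU-dependent) registration off the import path of callers that never take this branch, while still guaranteeing the op is registered before it is looked up.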