Julien Denize 57430fc95c
Default model load/config/tokenizer to mistral format if relevant files exist (#28659)
2025-11-21 13:58:59 -08:00

Features

Compatibility Matrix

The tables below show which features are mutually exclusive and which are supported on each hardware type.

The symbols used have the following meanings:

  • ✅ = Full compatibility
  • 🟠 = Partial compatibility
  • ❌ = No compatibility
  • ❔ = Unknown or TBD

!!! note
    Check the ❌ or 🟠 entries with links to see the tracking issue for an unsupported feature/hardware combination.

Feature x Feature

| Feature | CP | APC | LoRA | SD | CUDA graph | pooling | enc-dec | logP | prmpt logP | async output | multi-step | mm | best-of | beam-search | prompt-embeds |
|---------|----|-----|------|----|------------|---------|---------|------|------------|--------------|------------|----|---------|-------------|---------------|
| CP |
| APC |
| LoRA |
| SD |
| CUDA graph |
| pooling | 🟠* | 🟠* |
| enc-dec |
| logP |
| prmpt logP |
| async output |
| multi-step |
| mm | | | 🟠^ |
| best-of |
| beam-search |
| prompt-embeds |

* Chunked prefill and prefix caching are only applicable to last-token pooling.
^ LoRA is only applicable to the language backbone of multimodal models.
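The footnoted caveats above can be read as data: a partial-compatibility status attached to an unordered pair of features. A minimal sketch of that idea is below — note that `PARTIAL` and `compat_note` are hypothetical names for illustration, not part of vLLM's API, and only the two footnoted caveats are encoded.

```python
# Hypothetical helper (not part of vLLM): encode the partially
# compatible feature pairs from the footnotes above, keyed by an
# unordered pair so lookup is symmetric: (a, b) == (b, a).
PARTIAL: dict[frozenset, str] = {
    frozenset({"pooling", "CP"}): "only last-token pooling",
    frozenset({"pooling", "APC"}): "only last-token pooling",
    frozenset({"mm", "LoRA"}): "only the language backbone",
}


def compat_note(a, b):
    """Return the partial-compatibility caveat for a feature pair, or None."""
    return PARTIAL.get(frozenset({a, b}))


print(compat_note("CP", "pooling"))   # caveat for chunked prefill + pooling
print(compat_note("LoRA", "mm"))      # order does not matter
print(compat_note("CP", "APC"))       # no caveat recorded -> None
```

Using a `frozenset` key makes the matrix's symmetry explicit: querying `("mm", "LoRA")` or `("LoRA", "mm")` returns the same caveat.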

Feature x Hardware

| Feature | Volta | Turing | Ampere | Ada | Hopper | CPU | AMD | Intel GPU |
|---------|-------|--------|--------|-----|--------|-----|-----|-----------|
| CP |
| APC |
| LoRA |
| SD | | | | | | 🟠 |
| CUDA graph |
| pooling |
| enc-dec |
| mm | | | | | | 🟠 |
| prompt-embeds |
| logP |
| prmpt logP |
| async output |
| multi-step |
| best-of |
| beam-search |

!!! note
    For information on feature support on Google TPU, please refer to the TPU-Inference Recommended Models and Features documentation.