(multi-modality)=
# Multi-Modality
vLLM provides experimental support for multi-modal models through the {mod}`vllm.multimodal` package.
Multi-modal inputs can be passed alongside text and token prompts to supported models
via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.
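
For example, an image can be attached to a text prompt as a minimal sketch below; the model name, image path, and prompt template are illustrative and assume a LLaVA-style vision-language model:

```python
from PIL import Image

from vllm import LLM

# Illustrative model choice: any vLLM-supported multi-modal model works here.
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# Load the image to attach to the prompt (hypothetical file path).
image = Image.open("example.jpg")

# Attach the image via the `multi_modal_data` field of the prompt.
outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is shown in this image?\nASSISTANT:",
    "multi_modal_data": {"image": image},
})

print(outputs[0].outputs[0].text)
```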
Looking to add your own multi-modal model? Please follow the instructions listed here.
## Module Contents

```{eval-rst}
.. autodata:: vllm.multimodal.MULTIMODAL_REGISTRY
```
## Submodules

```{toctree}
:maxdepth: 1

inputs
parse
processing
profiling
registry
```