[Docs] Add GPT2ForSequenceClassification to supported models in docs (#19932)

Signed-off-by: nie3e <adrcwiek@gmail.com>
Adrian 2025-06-21 22:57:19 +02:00, committed by GitHub
parent 2c5302fadd
commit 3b1e4c6a23

@@ -445,7 +445,7 @@ Specified using `--task classify`.
| Architecture | Models | Example HF Models | [LoRA][lora-adapter] | [PP][distributed-serving] | [V1](gh-issue:8779) |
|----------------------------------|----------|----------------------------------------|------------------------|-----------------------------|-----------------------|
| `JambaForSequenceClassification` | Jamba | `ai21labs/Jamba-tiny-reward-dev`, etc. | ✅︎ | ✅︎ | |
| `GPT2ForSequenceClassification` | GPT2 | `nie3e/sentiment-polish-gpt2-small` | | | |
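
For the newly listed checkpoint, classification can be exercised offline through vLLM's Python entry point. A minimal sketch, assuming the `LLM(task="classify")` constructor and the `LLM.classify()` pooling API (verify both against your vLLM version):

```python
from vllm import LLM

# Load the GPT-2 sequence-classification checkpoint from the table
# above, selecting the classification task explicitly.
llm = LLM(model="nie3e/sentiment-polish-gpt2-small", task="classify")

# classify() returns one result per prompt; each result carries the
# per-class probabilities computed by the classification head.
(result,) = llm.classify(["Ten film był świetny!"])
print(result.outputs.probs)  # e.g. [0.03, 0.97] for a binary head
```

The same model can be served with `vllm serve` by passing `--task classify`, as noted at the top of this section.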
If your model is not in the above list, we will try to automatically convert the model using
[as_classification_model][vllm.model_executor.models.adapters.as_classification_model]. By default, the class probabilities are extracted from the softmaxed hidden state corresponding to the last token.
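
Conceptually, that default pooling projects the final token's hidden state through the model's scoring head and applies a softmax. A schematic sketch, where `hidden_states` and `score` are illustrative names rather than the adapter's actual internals:

```python
import torch

def pool_last_token(hidden_states: torch.Tensor,
                    score: torch.nn.Linear) -> torch.Tensor:
    """Default classification pooling: hidden state of the last
    token -> class logits -> softmax probabilities."""
    last_hidden = hidden_states[-1]       # (hidden_size,)
    logits = score(last_hidden)           # (num_classes,)
    return torch.softmax(logits, dim=-1)  # sums to 1 over classes
```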