diff --git a/docs/models/hardware_supported_models/cpu.md b/docs/models/hardware_supported_models/cpu.md
index 0832755f8fbe2..811778b2ad529 100644
--- a/docs/models/hardware_supported_models/cpu.md
+++ b/docs/models/hardware_supported_models/cpu.md
@@ -1,25 +1,33 @@
 # CPU - Intel® Xeon®
 
+## Validated Hardware
+
+| Hardware |
+| ----------------------------------------- |
+| [Intel® Xeon® 6 Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon.html) |
+| [Intel® Xeon® 5 Processors](https://www.intel.com/content/www/us/en/products/docs/processors/xeon/5th-gen-xeon-scalable-processors.html) |
+
 ## Supported Models
 
 ### Text-only Language Models
 
 | Model                                | Architecture                              | Supported |
 |--------------------------------------|-------------------------------------------|-----------|
-| meta-llama/Llama-3.1 / 3.3           | LlamaForCausalLM                          | ✅        |
-| meta-llama/Llama-4-Scout             | Llama4ForConditionalGeneration            | ✅        |
-| meta-llama/Llama-4-Maverick          | Llama4ForConditionalGeneration            | ✅        |
-| ibm-granite/granite (Granite-MOE)    | GraniteMoeForCausalLM                     | ✅        |
-| Qwen/Qwen3                           | Qwen3ForCausalLM                          | ✅        |
-| zai-org/GLM-4.5                      | GLMForCausalLM                            | ✅        |
-| google/gemma                         | GemmaForCausalLM                          | ✅        |
+| meta-llama/Llama-3.1-8B-Instruct     | LlamaForCausalLM                          | ✅        |
+| meta-llama/Llama-3.2-3B-Instruct     | LlamaForCausalLM                          | ✅        |
+| ibm-granite/granite-3.2-2b-instruct  | GraniteForCausalLM                        | ✅        |
+| Qwen/Qwen3-1.7B                      | Qwen3ForCausalLM                          | ✅        |
+| Qwen/Qwen3-4B                        | Qwen3ForCausalLM                          | ✅        |
+| Qwen/Qwen3-8B                        | Qwen3ForCausalLM                          | ✅        |
+| zai-org/glm-4-9b-hf                  | GLMForCausalLM                            | ✅        |
+| google/gemma-7b                      | GemmaForCausalLM                          | ✅        |
 
 ### Multimodal Language Models
 
 | Model                                | Architecture                              | Supported |
 |--------------------------------------|-------------------------------------------|-----------|
-| Qwen/Qwen2.5-VL                      | Qwen2VLForConditionalGeneration           | ✅        |
-| openai/whisper                       | WhisperForConditionalGeneration           | ✅        |
+| Qwen/Qwen2.5-VL-7B-Instruct          | Qwen2VLForConditionalGeneration           | ✅        |
+| openai/whisper-large-v3              | WhisperForConditionalGeneration           | ✅        |
 
 ✅ Runs and optimized.
-🟨 Runs and correct but not optimized to green yet.
+🟨 Runs correctly, but is not yet fully optimized.
diff --git a/docs/models/hardware_supported_models/xpu.md b/docs/models/hardware_supported_models/xpu.md
new file mode 100644
index 0000000000000..7b8dcf5c9af26
--- /dev/null
+++ b/docs/models/hardware_supported_models/xpu.md
@@ -0,0 +1,62 @@
+# XPU - Intel® GPUs
+
+## Validated Hardware
+
+| Hardware |
+| ----------------------------------------- |
+| [Intel® Arc™ Pro B-Series Graphics](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/workstations/b-series/overview.html) |
+
+## Supported Models
+
+### Text-only Language Models
+
+| Model                                     | Architecture                                | FP16 | Dynamic FP8 | MXFP4 |
+| ----------------------------------------- | ------------------------------------------- | ---- | ----------- | ----- |
+| openai/gpt-oss-20b                        | GPTForCausalLM                              |      |             | ✅    |
+| openai/gpt-oss-120b                       | GPTForCausalLM                              |      |             | ✅    |
+| deepseek-ai/DeepSeek-R1-Distill-Llama-8B  | LlamaForCausalLM                            | ✅   | ✅          |       |
+| deepseek-ai/DeepSeek-R1-Distill-Qwen-14B  | QwenForCausalLM                             | ✅   | ✅          |       |
+| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B  | QwenForCausalLM                             | ✅   | ✅          |       |
+| deepseek-ai/DeepSeek-R1-Distill-Llama-70B | LlamaForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen2.5-72B-Instruct                 | Qwen2ForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen3-14B                            | Qwen3ForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen3-32B                            | Qwen3ForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen3-30B-A3B                        | Qwen3ForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen3-30B-A3B-GPTQ-Int4              | Qwen3ForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen3-Coder-30B-A3B-Instruct         | Qwen3ForCausalLM                            | ✅   | ✅          |       |
+| Qwen/QwQ-32B                              | QwenForCausalLM                             | ✅   | ✅          |       |
+| deepseek-ai/DeepSeek-V2-Lite              | DeepSeekForCausalLM                         | ✅   | ✅          |       |
+| meta-llama/Llama-3.1-8B-Instruct          | LlamaForCausalLM                            | ✅   | ✅          |       |
+| baichuan-inc/Baichuan2-13B-Chat           | BaichuanForCausalLM                         | ✅   | ✅          |       |
+| THUDM/GLM-4-9B-chat                       | GLMForCausalLM                              | ✅   | ✅          |       |
+| THUDM/CodeGeex4-All-9B                    | CodeGeexForCausalLM                         | ✅   | ✅          |       |
+| chuhac/TeleChat2-35B                      | LlamaForCausalLM (TeleChat2 is Llama-based) | ✅   | ✅          |       |
+| 01-ai/Yi-1.5-34B-Chat                     | YiForCausalLM                               | ✅   | ✅          |       |
+| deepseek-ai/DeepSeek-Coder-33B-base       | DeepSeekCoderForCausalLM                    | ✅   | ✅          |       |
+| meta-llama/Llama-2-13b-chat-hf            | LlamaForCausalLM                            | ✅   | ✅          |       |
+| Qwen/Qwen1.5-14B-Chat                     | QwenForCausalLM                             | ✅   | ✅          |       |
+| Qwen/Qwen1.5-32B-Chat                     | QwenForCausalLM                             | ✅   | ✅          |       |
+
+### Multimodal Language Models
+
+| Model                        | Architecture                     | FP16 | Dynamic FP8 | MXFP4 |
+| ---------------------------- | -------------------------------- | ---- | ----------- | ----- |
+| OpenGVLab/InternVL3_5-8B     | InternVLForConditionalGeneration | ✅   | ✅          |       |
+| OpenGVLab/InternVL3_5-14B    | InternVLForConditionalGeneration | ✅   | ✅          |       |
+| OpenGVLab/InternVL3_5-38B    | InternVLForConditionalGeneration | ✅   | ✅          |       |
+| Qwen/Qwen2-VL-7B-Instruct    | Qwen2VLForConditionalGeneration  | ✅   | ✅          |       |
+| Qwen/Qwen2.5-VL-72B-Instruct | Qwen2VLForConditionalGeneration  | ✅   | ✅          |       |
+| Qwen/Qwen2.5-VL-32B-Instruct | Qwen2VLForConditionalGeneration  | ✅   | ✅          |       |
+| THUDM/GLM-4v-9B              | GLM4vForConditionalGeneration    | ✅   | ✅          |       |
+| openbmb/MiniCPM-V-4          | MiniCPMVForConditionalGeneration | ✅   | ✅          |       |
+
+### Embedding and Reranker Language Models
+
+| Model                   | Architecture                   | FP16 | Dynamic FP8 | MXFP4 |
+| ----------------------- | ------------------------------ | ---- | ----------- | ----- |
+| Qwen/Qwen3-Embedding-8B | Qwen3ForTextEmbedding          | ✅   | ✅          |       |
+| Qwen/Qwen3-Reranker-8B  | Qwen3ForSequenceClassification | ✅   | ✅          |       |
+
+✅ Runs and optimized.
+🟨 Runs correctly, but is not yet fully optimized.
+❌ Does not pass accuracy test or does not run.
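As a usage note for the tables in this diff: any validated model can be launched through vLLM's standard OpenAI-compatible server entry point. The sketch below is a minimal deployment example, not part of the patch itself; it assumes vLLM has been installed with the matching CPU or XPU backend, and the model ID and port are placeholders you would swap for your own.

```shell
# Minimal sketch: serve a validated model with vLLM's OpenAI-compatible server.
# Assumes vLLM is installed with the appropriate CPU or XPU build.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --dtype bfloat16 \
    --port 8000

# Once the server is up, query it with any OpenAI-compatible client, e.g.:
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "prompt": "Hello", "max_tokens": 16}'
```

The `--dtype` flag matters on these backends: the XPU tables above distinguish FP16, dynamic FP8, and MXFP4 support per model, so the dtype/quantization you request should match a checked column for that model.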