The best LLMs for your use case:
Qwen's native multimodal MoE model with 397B total parameters and 17B active, featuring hybrid Gated Delta Networks for strong reasoning and vision capabilities.
Speed:
Intelligence:
Price (per 1M tokens): $0.60 input / $3.60 output
Inputs:
JSON Mode:
Function Calling:
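At the listed rates, per-request cost is easy to work out. A minimal Python sketch, assuming the two figures are the input and output prices per million tokens (the function name is illustrative, not part of any provider API):

```python
# Token pricing from the listing above: $0.60 per 1M input tokens,
# $3.60 per 1M output tokens.
INPUT_PRICE_PER_M = 0.60
OUTPUT_PRICE_PER_M = 3.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion.
cost = request_cost(4_000, 1_000)  # $0.006
```

Output tokens dominate the bill at these rates: each generated token costs six times an input token, so long completions matter more than long prompts.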
Benchmarks:
MMMU (Multimodal - Vision)
GPQA-Diamond (General Knowledge)
MMLU-Pro (General Knowledge)
LongBenchv2 (Summarization)
Multilingual MMLU (Multilingual)
BFCL (Agents and Function Calling)
LMArena (Chat)
LiveCodeBench (Code)
1T-parameter MoE reasoning model with state-of-the-art performance on math, code, and multimodal tasks.
Speed:
Intelligence:
Price (per 1M tokens): $0.50 input / $2.80 output
Inputs:
JSON Mode:
Function Calling:
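Since the card lists function calling as supported, here is a hedged sketch of what a function-calling request typically looks like in the OpenAI-compatible format most providers accept. The tool name, model ID, and endpoint are placeholders, not details from this listing:

```python
import json

# Hypothetical tool schema: a single weather-lookup function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Request body in the common OpenAI-compatible shape; substitute the
# provider's actual model ID for the placeholder.
payload = {
    "model": "placeholder-model-id",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}

body = json.dumps(payload)  # POST this to the provider's chat endpoint
```

With `tool_choice` set to `"auto"`, the model decides whether to answer directly or return a structured call to `get_weather`, which your code then executes and feeds back as a tool message.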
Benchmarks:
MMMU (Multimodal - Vision)
LiveCodeBench (Code)
GPQA-Diamond (General Knowledge)
MMLU-Pro (General Knowledge)
LMArena (Chat)
WebDevArena (Code)
LongBenchv2 (Summarization)
SimpleQA (General Knowledge)
Use case: Multimodal - Vision