The best LLMs for your use case:

1. MiniMax-M2.5 (MiniMax)

Reasoning-focused large model from MiniMax with long-context support and strong frontier benchmark performance.

Price (per 1M tokens): $0.30 input / $1.20 output

Inputs: Image, Text

Benchmarks:

#1 BFCL (Agents and Function Calling): 76.8
#4 GPQA-Diamond (General Knowledge): 85.2
#5 SimpleQA (General Knowledge): 44
#7 LiveCodeBench (Code): 74.1
#7 MMMU (Multimodal - Vision): 68
#15 MMLU-Pro (General Knowledge): 76.5
2. Qwen3.5 397B-A17B (Qwen)

Qwen's native multimodal MoE model with 397B total parameters and 17B active, featuring hybrid Gated Delta Networks for strong reasoning and vision capabilities.

Price (per 1M tokens): $0.60 input / $3.60 output

Inputs: Image, Text

Benchmarks:

#2 BFCL (Agents and Function Calling): 72.9
#1 GPQA-Diamond (General Knowledge): 88.4
#1 MMLU-Pro (General Knowledge): 87.8
#1 LongBench v2 (Summarization): 63.2
#1 Multilingual MMLU (Multilingual): 88.5
#1 MMMU (Multimodal - Vision): 85
#3 LMArena (Chat): 1447
#3 LiveCodeBench (Code): 83.6
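The listed per-1M-token prices make request costs easy to estimate. The sketch below is a minimal, hypothetical cost calculator using only the prices quoted above; the model names serve as dictionary keys and the token counts are illustrative.

```python
# Hypothetical cost estimator based on the per-1M-token prices listed above.
PRICES = {  # model -> (input $/1M tokens, output $/1M tokens)
    "MiniMax-M2.5": (0.30, 1.20),
    "Qwen3.5 397B-A17B": (0.60, 3.60),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request for the given model."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Example: a 50k-token prompt producing a 2k-token completion.
for name in PRICES:
    print(f"{name}: ${estimate_cost(name, 50_000, 2_000):.4f}")
```

At those illustrative token counts, MiniMax-M2.5 comes out to $0.0174 and Qwen3.5 397B-A17B to $0.0372 per request, roughly a 2x difference tracking the list prices.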

Use case: Agents and Function Calling
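Since the use case here is agents and function calling, a request typically declares the tools the model may invoke as JSON schemas. The sketch below builds such a request body in the widely used OpenAI-compatible shape; the `get_weather` tool, the model name, and the overall payload are illustrative assumptions, not a documented API for either model above.

```python
import json

def build_tool_call_request(model: str, user_message: str) -> str:
    """Assemble an OpenAI-compatible chat request that exposes one
    hypothetical tool (get_weather) for the model to call."""
    request = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }
    return json.dumps(request)

body = build_tool_call_request("MiniMax-M2.5", "What's the weather in Paris?")
```

A BFCL-style score, as listed above, measures how reliably a model emits well-formed calls against schemas like this one; the serialized `body` would be POSTed to whatever chat-completions endpoint the provider exposes.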