The best LLMs for your use case (Multimodal - Vision):

1. Qwen3.5 397B-A17B (Qwen)

Qwen's native multimodal MoE model with 397B total parameters and 17B active, featuring hybrid Gated Delta Networks for strong reasoning and vision capabilities.
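
The MoE split is what keeps a model this large affordable to run: per-token compute scales with the active parameters, not the total. A rough sketch using the common ~2 × N FLOPs-per-token rule of thumb (an approximation for illustration, not a vendor figure):

```python
# Rough per-token compute for a sparse MoE model vs. a dense model of the
# same total size. The ~2 * N FLOPs-per-token forward-pass rule of thumb is
# an approximation for illustration, not a vendor-published figure.

TOTAL_PARAMS = 397e9   # 397B total parameters across all experts
ACTIVE_PARAMS = 17e9   # 17B parameters activated per token

dense_flops = 2 * TOTAL_PARAMS   # what a dense 397B model would spend per token
moe_flops = 2 * ACTIVE_PARAMS    # what the MoE actually spends per token

print(f"active fraction: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")       # ~4.3%
print(f"compute saving vs. dense: {dense_flops / moe_flops:.0f}x")  # ~23x
```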

Price (per 1M tokens): $0.60 input / $3.60 output

Inputs: Image, Text
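
Those prices are quoted per million tokens, so a request's cost is a weighted sum. A minimal sketch, assuming the two listed figures are the input and output rates respectively:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Estimate one request's cost from per-1M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Qwen3.5 397B-A17B at $0.60 in / $3.60 out:
# a 4,000-token prompt with a 1,000-token reply costs about $0.006.
print(f"${request_cost(4_000, 1_000, 0.60, 3.60):.4f}")  # $0.0060
```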

Benchmarks:

| Rank | Benchmark | Category | Score |
|------|-----------|----------|-------|
| #1 | MMMU | Multimodal - Vision | 85 |
| #1 | GPQA-Diamond | General Knowledge | 88.4 |
| #1 | MMLU-Pro | General Knowledge | 87.8 |
| #1 | LongBench v2 | Summarization | 63.2 |
| #1 | Multilingual MMLU | Multilingual | 88.5 |
| #2 | BFCL | Agents and Function Calling | 72.9 |
| #3 | LMArena | Chat | 1447 |
| #3 | LiveCodeBench | Code | 83.6 |
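
BFCL (the Berkeley Function Calling Leaderboard) measures how reliably a model emits well-formed tool calls. A minimal sketch of such a request through an OpenAI-compatible client; the base URL, API key, and model ID below are placeholders, not a provider's documented values:

```python
from openai import OpenAI

# Placeholder endpoint and credentials; substitute your provider's real values.
client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, defined only for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen3.5-397b-a17b",  # placeholder model ID
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)
# A strong BFCL score means this usually arrives as a well-formed tool call.
print(response.choices[0].message.tool_calls)
```
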
2. Kimi K2.5 (Moonshot)

1T-parameter MoE reasoning model with state-of-the-art performance on math, code, and multimodal tasks.

Price (per 1M tokens): $0.50 input / $2.80 output

Inputs: Image, Text
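
With both price sheets listed, the difference is easy to put in dollars for a concrete workload. A minimal sketch comparing the two models under the same per-1M-token convention as above:

```python
# $ per 1M tokens as (input, output), taken from the listings above.
PRICES = {
    "Qwen3.5 397B-A17B": (0.60, 3.60),
    "Kimi K2.5": (0.50, 2.80),
}

def workload_cost(input_tokens: float, output_tokens: float,
                  rates: tuple[float, float]) -> float:
    in_rate, out_rate = rates
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example month: 10M input tokens, 2M output tokens.
for model, rates in PRICES.items():
    print(f"{model}: ${workload_cost(10e6, 2e6, rates):.2f}")
# Qwen3.5 397B-A17B: $13.20
# Kimi K2.5: $10.60
```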

Benchmarks:

| Rank | Benchmark | Category | Score |
|------|-----------|----------|-------|
| #2 | MMMU | Multimodal - Vision | 84.3 |
| #1 | LiveCodeBench | Code | 85 |
| #2 | GPQA-Diamond | General Knowledge | 87.6 |
| #2 | MMLU-Pro | General Knowledge | 87.1 |
| #2 | LMArena | Chat | 1447 |
| #2 | WebDevArena | Code | 1446 |
| #2 | LongBench v2 | Summarization | 61 |
| #6 | SimpleQA | General Knowledge | 36.9 |
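
LMArena and WebDevArena figures are Elo-style ratings fitted from head-to-head human votes, so a rating gap maps to an expected win rate. A minimal sketch of the standard Elo conversion (the arenas fit a Bradley-Terry model, but the conversion has the same shape):

```python
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected chance that model A beats model B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Both models sit at 1447 on LMArena here, so head-to-head is a coin flip;
# against a hypothetical 1397-rated model, a 50-point lead wins ~57% of votes.
print(elo_win_probability(1447, 1447))  # 0.5
print(elo_win_probability(1447, 1397))  # ~0.571
```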
