The best LLMs for your use case:

1. Qwen3 235B A22B (Qwen)

Hybrid instruct + reasoning model (235B-parameter MoE with 22B active parameters) optimized for high-throughput, cost-efficient inference and distillation.

Speed:
Intelligence:
Price (per 1M tokens): $0.20
Inputs: Image, Text
JSON Mode:
Function Calling:

Benchmarks:

#1  BFCL (Agents and Function Calling): 70.8
#1  LiveBench (General Knowledge): 73.23
#1  EQBench (Creative Writing): 1271.6
#1  LiveCodeBench (Code): 80.4
#1  Aider Polyglot (Code): 59.6
#1  MGSM (Multilingual): 92.7
#2  GPQA-Diamond (General Knowledge): 70
#2  MMLU-Pro (General Knowledge): 83.66
#2  LongBenchv2 (Summarization): 50.1
#2  Multilingual MMLU (Multilingual): 82.8
#3  WebDevArena (Code): 1186
#11 LMArena (Chat): 45.92
2. Qwen 2.5 72B Instruct Turbo (Qwen)

Decoder-only model built for advanced language processing tasks.

Speed:
Intelligence:
Price (per 1M tokens): $1.20
Inputs: Image, Text
JSON Mode:
Function Calling:

Benchmarks:

#2  BFCL (Agents and Function Calling): 63.37
#4  MMLU-Pro (General Knowledge): 71.1
#4  EQBench (Creative Writing): 701.3
#4  LiveCodeBench (Code): 55.5
#4  LongBenchv2 (Summarization): 43.5
#5  Multilingual MMLU (Multilingual): 69.05
#6  LiveBench (General Knowledge): 52.3
#6  MGSM (Multilingual): 89.5
#7  LMArena (Chat): 1257

Use case:

Agents and Function Calling
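Both models list Function Calling support, which matters for the agents use case above. As a minimal sketch of what driving that feature looks like, the snippet below builds a function-calling request payload in the common OpenAI-compatible chat-completions shape. The endpoint details, the exact model identifier string, and the `get_weather` tool are illustrative assumptions, not this provider's documented API surface; no network call is made.

```python
import json

def build_function_calling_request(model: str, user_message: str) -> dict:
    """Build a chat-completions payload that exposes one tool to the model.

    The tool schema follows the widely used OpenAI-compatible format:
    a "tools" list of function definitions with JSON Schema parameters.
    """
    return {
        "model": model,  # assumed identifier; check the provider's model list
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Let the model decide whether to call the tool or answer directly.
        "tool_choice": "auto",
    }

payload = build_function_calling_request(
    "Qwen/Qwen3-235B-A22B", "What's the weather in Paris?"
)
print(json.dumps(payload, indent=2))
```

In a real agent loop you would POST this payload to the provider's chat-completions endpoint, inspect the response for `tool_calls`, execute the named function locally, and send the result back as a `tool`-role message.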