o4 Mini (`openai/o4-mini-2025-04-16`)

Release Date: 4/16/2025

Avg. Accuracy: 77.3%
Latency: 26.58s

Performance by Benchmark

| Benchmark     | Accuracy | Rank    |
| ------------- | -------- | ------- |
| FinanceAgent  | 36.5%    | 6 / 24  |
| CorpFin       | 70.1%    | 3 / 39  |
| CaseLaw       | 81.1%    | 25 / 62 |
| ContractLaw   | 68.9%    | 15 / 69 |
| TaxEval       | 78.8%    | 3 / 49  |
| MortgageTax   | 77.1%    | 8 / 29  |
| Math500       | 94.2%    | 5 / 45  |
| AIME          | 83.7%    | 6 / 39  |
| MGSM          | 93.4%    | 2 / 43  |
| LegalBench    | 79.0%    | 20 / 67 |
| MedQA         | 96.0%    | 3 / 47  |
| GPQA          | 74.5%    | 6 / 40  |
| MMLU Pro      | 80.6%    | 9 / 40  |
| LiveCodeBench | 66.5%    | 1 / 40  |
| MMMU          | 79.7%    | 3 / 26  |

The list mixes academic benchmarks with proprietary benchmarks (contact us to get access to the proprietary sets).
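The headline Avg. Accuracy appears to be the unweighted mean of the fifteen per-benchmark scores listed above; a quick sanity check in Python:

```python
# Per-benchmark accuracies for o4 Mini, in percent, as listed above.
scores = [36.5, 70.1, 81.1, 68.9, 78.8, 77.1, 94.2, 83.7,
          93.4, 79.0, 96.0, 74.5, 80.6, 66.5, 79.7]

avg = sum(scores) / len(scores)
print(f"{avg:.1f}%")  # -> 77.3%, matching the headline figure
```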

Cost Analysis

| Metric                 | Cost             |
| ---------------------- | ---------------- |
| Input Cost             | $1.10 / M tokens |
| Output Cost            | $4.40 / M tokens |
| Input Cost (per char)  | $0.63 / M chars  |
| Output Cost (per char) | N/A              |
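At the listed per-million-token rates, estimating the cost of a single request is straightforward. A minimal sketch (the `request_cost` helper and the example token counts are illustrative, not part of any API):

```python
# Listed per-million-token rates for o4 Mini (USD).
INPUT_PER_M = 1.10
OUTPUT_PER_M = 4.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 5,000-token prompt with a 1,000-token response.
print(f"${request_cost(5_000, 1_000):.4f}")  # -> $0.0099
```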

Join our mailing list to receive benchmark updates and stay up to date as new benchmarks and models are released.