o3 Mini (openai/o3-mini-2025-01-31)

OpenAI's most recent small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini.

Release Date: 1/31/2025
Avg. Accuracy: 72.2%
Latency: 53.65s

Performance by Benchmark

Benchmark        Accuracy   Ranking
FinanceAgent     12.7%      17 / 24
CorpFin          55.7%      22 / 39
CaseLaw          78.5%      32 / 62
ContractLaw      69.3%      13 / 69
TaxEval          73.9%      21 / 49
Math500          91.8%       8 / 45
AIME             86.5%       1 / 39
MGSM             91.6%      14 / 43
LegalBench       70.9%      46 / 67
MedQA            94.8%       4 / 47
GPQA             75.0%       5 / 40
MMLU Pro         78.7%      17 / 40
LiveCodeBench    59.8%       5 / 40

The table includes both academic benchmarks and proprietary benchmarks (contact us to get access to the proprietary ones).
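
The headline Avg. Accuracy figure appears to be a simple unweighted mean of the per-benchmark accuracies above (the exact weighting scheme is not documented on this page); a quick sketch under that assumption reproduces the 72.2% figure:

```python
# Unweighted mean of the per-benchmark accuracies listed above.
# Assumption: "Avg. Accuracy" is a simple arithmetic mean; the page
# does not document any weighting scheme.
scores = {
    "FinanceAgent": 12.7, "CorpFin": 55.7, "CaseLaw": 78.5,
    "ContractLaw": 69.3, "TaxEval": 73.9, "Math500": 91.8,
    "AIME": 86.5, "MGSM": 91.6, "LegalBench": 70.9,
    "MedQA": 94.8, "GPQA": 75.0, "MMLU Pro": 78.7,
    "LiveCodeBench": 59.8,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.1f}%")  # 72.2%
```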

Cost Analysis

Input Cost: $1.10 / M tokens
Output Cost: $4.40 / M tokens
Input Cost (per char): $0.29 / M chars
Output Cost (per char): N/A
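
To show how these rates translate into per-request spend, here is a minimal sketch; the token counts are hypothetical and chosen only for illustration:

```python
# Estimate the cost of a single o3-mini request from the listed rates.
# Prices are per 1M tokens; the token counts below are hypothetical.
INPUT_PRICE_PER_M = 1.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 4.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 5,000-token prompt with a 2,000-token response
print(f"${request_cost(5_000, 2_000):.4f}")  # $0.0143
```

Note that reasoning models also bill hidden reasoning tokens as output tokens, so the output cost of a request can exceed what the visible completion length alone would suggest.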

Overview

o3 Mini is the latest model in OpenAI's reasoning series. Like o1-mini, it is a smaller, more cost-efficient model, and OpenAI claims it is also faster than o1-mini.

New in this model, users can select the level of reasoning effort they want the model to apply: low, medium, or high.
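
For illustration, here is a minimal sketch of selecting the effort level via the OpenAI Python SDK's reasoning_effort parameter (the prompt and effort level are arbitrary examples; consult the current API documentation for exact parameter support):

```python
# Minimal sketch: selecting the reasoning effort level for o3-mini
# via the OpenAI Python SDK (reads OPENAI_API_KEY from the environment).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # one of "low", "medium", "high"
    messages=[
        {"role": "user", "content": "How many primes are there below 100?"}
    ],
)
print(response.choices[0].message.content)
```

Higher effort levels generally spend more reasoning tokens, trading latency and cost for accuracy.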

Key Specifications

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens
  • Training Cutoff: October 2023
  • Pricing:
    • Input: $1.10 / 1M tokens
    • Output: $4.40 / 1M tokens
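
As a small illustration of working within these limits, the sketch below estimates prompt length with the tiktoken library and checks it against the context window; using the o200k_base encoding for o3-mini is an assumption here, so verify against current tokenizer documentation:

```python
# Rough guard against the 200,000-token context window.
# Assumption: o3-mini uses the o200k_base encoding, as recent OpenAI
# models do; confirm with current tokenizer documentation.
import tiktoken

CONTEXT_WINDOW = 200_000
MAX_OUTPUT_TOKENS = 100_000

enc = tiktoken.get_encoding("o200k_base")

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT_TOKENS) -> bool:
    """Return True if the prompt plus a reserved output budget fits the window."""
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize the attached 10-K filing."))  # True
```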