GPT-5 nano

OpenAI · GPT-5

Ultra-low-cost GPT-5 tier for high-throughput automation and lightweight reasoning tasks.

Type
language
Context
400K tokens
Max Output
128K tokens
Status
current
Input
$0.05/1M tok
Output
$0.40/1M tok
API Access
Yes
License
proprietary
high-throughput · low-cost · classification · automation · tool-use
Released August 2025 · Updated February 15, 2026

Overview

Freshness note: Model capabilities, limits, and pricing can change quickly. This profile is a point-in-time snapshot last verified on February 15, 2026.

GPT-5 nano is a budget-first GPT-5 family tier intended for large-scale operational workloads. It is best suited to cases where latency and per-request cost dominate and tasks are structured enough not to require deep multi-step reasoning.

Capabilities

The model works well for classification, routing, normalization, lightweight extraction, and short transformation tasks. It can also support simple coding and templated content tasks when instructions are precise.
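A minimal classification sketch using the OpenAI Python SDK is shown below. The model identifier "gpt-5-nano" and the label taxonomy are assumptions for illustration and should be checked against OpenAI's current model list before use.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

LABELS = ["billing", "bug", "feature_request", "other"]  # example taxonomy

def classify_ticket(text: str) -> str:
    """Ask the model for exactly one label from LABELS, returned as plain text."""
    resp = client.chat.completions.create(
        model="gpt-5-nano",  # assumed identifier; confirm against the official model list
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the support ticket into exactly one of: "
                    + ", ".join(LABELS)
                    + ". Reply with the label only."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

print(classify_ticket("I was charged twice for my subscription this month."))
```

Keeping the expected output to a single short label is what makes this tier cheap at scale: output tokens are the more expensive side of the pricing.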

Technical Details

GPT-5 nano sits at the high-throughput edge of the GPT-5 lineup. Teams should design prompts and guardrails for determinism and brevity to maximize cost efficiency.
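One simple guardrail pattern is post-hoc validation: reject any output outside the expected label set and fall back to a deterministic default. The sketch below assumes the hypothetical classify_ticket helper from the previous example.

```python
ALLOWED = {"billing", "bug", "feature_request", "other"}
FALLBACK = "other"  # deterministic default when the model strays off-format

def guarded_label(text: str) -> str:
    """Normalize the raw model output and fall back if it is not an allowed label."""
    raw = classify_ticket(text)  # hypothetical helper from the sketch above
    label = raw.strip().lower().replace(" ", "_")
    return label if label in ALLOWED else FALLBACK
```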

Pricing & Access

Available through OpenAI API channels where GPT-5 variants are exposed. Because pricing and limits can change, teams should verify current values against official OpenAI pricing references before locking in forecast assumptions.
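Given the rates listed above ($0.05 per 1M input tokens, $0.40 per 1M output tokens), a rough monthly forecast is straightforward to compute; the request volume and per-request token counts below are illustrative placeholders, not benchmarks.

```python
INPUT_PRICE = 0.05   # USD per 1M input tokens (listed rate; verify before budgeting)
OUTPUT_PRICE = 0.40  # USD per 1M output tokens (listed rate; verify before budgeting)

def monthly_cost(requests: int, in_tok: int, out_tok: int) -> float:
    """Estimate monthly spend for a fixed per-request token profile."""
    total_in_millions = requests * in_tok / 1_000_000
    total_out_millions = requests * out_tok / 1_000_000
    return total_in_millions * INPUT_PRICE + total_out_millions * OUTPUT_PRICE

# Example: 10M requests/month at ~300 input tokens and ~20 output tokens each
print(f"${monthly_cost(10_000_000, 300, 20):,.2f}")  # -> $230.00
```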

Best Use Cases

Ideal for preprocessing pipelines, event labeling, ticket triage, content normalization, and background automation where very high request volume is expected.
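For high-volume background automation, a plain thread pool over the guarded classifier from the earlier sketches illustrates the pattern; the worker count is an assumption to tune against actual rate limits and account quotas.

```python
from concurrent.futures import ThreadPoolExecutor

def label_batch(tickets: list[str], workers: int = 8) -> list[str]:
    """Label a batch of tickets concurrently; executor.map preserves input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(guarded_label, tickets))

# Example: labels = label_batch(load_tickets())  # load_tickets() is a placeholder
```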

Comparisons

Compared with GPT-5 mini, GPT-5 nano prioritizes cost and throughput over reasoning depth. Compared with DeepSeek-Reasoner, nano is better suited to deterministic operational flows than to open-ended analysis. Compared with Gemini 2.5 Flash-Lite, the tradeoffs come down mainly to ecosystem fit, latency, and output-style preferences.