Mistral Large 3
Mistral AI · Mistral Large
Mistral's flagship European multimodal model with long context and competitive enterprise API economics.
Overview
Freshness note: Model capabilities, limits, and pricing can change quickly. This profile is a point-in-time snapshot last verified on February 15, 2026.
Mistral Large 3 is Mistral AI’s flagship large model for enterprise-grade assistant and agent workloads. It is a key European option for teams that want strong capability with EU-aligned vendor strategy and cost structures.
In Mistral’s lineup, Large 3 is the high-capability endpoint, while smaller variants focus on latency and cost efficiency.
Capabilities
Mistral Large 3 is especially strong for:
- Long-context language tasks over large documentation sets.
- Tool and function-calling workflows in production assistants.
- Multimodal interactions, including combined text-and-image workflows.
- Structured enterprise tasks: extraction, synthesis, and procedural guidance.
- Cost-sensitive high-quality generation compared with some frontier-priced alternatives.
It is often used as a “quality tier” in routing setups with Mistral Medium/Small handling bulk traffic.
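The routing setup described above can be sketched as a simple policy: send short, low-stakes requests to a smaller tier and escalate tool use or long prompts to Large 3. This is a minimal sketch; the model aliases and the length threshold below are illustrative assumptions, not verified endpoint names or tuned values.

```python
def pick_model(prompt: str, needs_tools: bool) -> str:
    """Route a request to a quality tier based on rough complexity signals.

    Model aliases are illustrative assumptions; check Mistral's current
    documentation for real identifiers.
    """
    # Escalate to the high-capability tier for tool use or long prompts.
    if needs_tools or len(prompt) > 2000:
        return "mistral-large-latest"
    # Bulk, low-complexity traffic goes to a cheaper, lower-latency tier.
    return "mistral-small-latest"
```

In practice the complexity signal is often richer (a classifier, user tier, or task type), but the shape of the policy stays the same.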
Technical Details
Mistral's model documentation for Large 3 publishes:
- A 256K-token maximum context length on the Large 3 model card.
- Native support for modern chat/completions-style orchestration patterns.
- Strong compatibility with function-calling tool ecosystems.
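Function-calling requests against OpenAI-compatible chat endpoints, including Mistral's, generally pass a JSON tools array of this shape. The `get_invoice_status` tool and its parameters below are made-up examples, not part of any real API.

```python
# A typical tools array for an OpenAI-compatible chat endpoint.
# The tool name and parameters are hypothetical examples.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_invoice_status",
            "description": "Look up the payment status of an invoice.",
            "parameters": {
                "type": "object",
                "properties": {
                    "invoice_id": {"type": "string"},
                },
                "required": ["invoice_id"],
            },
        },
    }
]
```

The model responds with a structured tool call naming the function and arguments; the application executes the tool and feeds the result back as a follow-up message.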
Mistral’s published model card emphasizes the maximum context length and does not always expose a separate per-model maximum-generation ceiling. For consistency in this profile schema, maxOutput therefore reflects the published context-length upper bound.
For teams with strict governance requirements, version pinning and automated regression checks are still important because model aliases can shift behavior over time.
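A minimal version of the regression check mentioned above: pin an exact model version string and assert that a small golden set of prompts still satisfies simple invariants after any alias or version change. The pinned version string, the golden cases, and the `call_model` callable are all stand-ins, assumed for illustration.

```python
PINNED_MODEL = "mistral-large-2512"  # hypothetical pinned version string

GOLDEN_CASES = [
    # (prompt, predicate over the response text)
    ("Reply with exactly: OK", lambda out: out.strip() == "OK"),
    ("List three EU capitals, comma-separated.", lambda out: out.count(",") >= 2),
]

def run_regression(call_model) -> list[str]:
    """Return the prompts whose responses violate their invariant.

    `call_model(model, prompt) -> str` is a stand-in for whatever
    API client the team actually uses.
    """
    failures = []
    for prompt, ok in GOLDEN_CASES:
        if not ok(call_model(PINNED_MODEL, prompt)):
            failures.append(prompt)
    return failures
```

Running this in CI on every alias update catches silent behavior drift before it reaches production traffic.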
Pricing & Access
Mistral platform pricing for Large 3 (per 1M tokens):
- Input: $0.50
- Output: $1.50
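The per-token rates above translate into per-request costs as follows. This is a sketch assuming the listed rates; verify current pricing before budgeting.

```python
INPUT_PER_M = 0.50   # USD per 1M input tokens (rate listed above)
OUTPUT_PER_M = 1.50  # USD per 1M output tokens (rate listed above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 20K-token prompt with a 1K-token answer costs
# 20_000 * 0.50 / 1e6 + 1_000 * 1.50 / 1e6 = $0.0115
```

At these rates, even long-context requests that use a large share of the 256K window remain well under a dollar of input cost per call.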
Access options:
- Mistral API (La Plateforme)
- Cloud and enterprise integration paths (including major cloud partners)
At this pricing level, Large 3 can be an attractive default high-capability model for many enterprise workloads that cannot sustain higher frontier token costs.
Best Use Cases
Choose Mistral Large 3 for:
- Enterprise assistants with strict cost-performance constraints.
- EU-oriented deployments requiring strong regional vendor alignment.
- Tool-heavy workflows with structured output requirements.
- Long-context analysis where 256K capacity is operationally useful.
It is less ideal when you need absolute peak frontier benchmark performance regardless of spend. It is also less ideal for minimal-complexity chat, where smaller Mistral variants can deliver similar user value at significantly lower latency and cost.
Comparisons
- GPT-5 (OpenAI): GPT-5 often leads in top-end frontier capability breadth; Mistral Large 3 is frequently more cost-favorable for enterprise throughput.
- Claude Opus 4.6 (Anthropic): Opus is premium and highly capable on hard reasoning; Mistral offers a strong value proposition with lower token rates.
- Gemini 3 Pro Preview (Google): Gemini offers broader native multimodal coverage and 1M+ context; Mistral Large 3 is easier to position as a cost-efficient, high-capability EU alternative.