Mistral AI
Large language model platform that provides conversational AI assistants and proprietary large language models for developers and enterprises

<ul><li><strong>Pricing Model:</strong> Hybrid | Per-seat subscription (Le Chat) + Token-based usage (API)</li><li><strong>Packaging Model:</strong> Freemium | Good/Better/Best (Free, Pro, Team) with separate enterprise API pay-as-you-go</li><li><strong>Credit Model:</strong> Prepaid credits for API</li></ul>
Last update: February 2, 2026
<h3>Product Overview</h3><p>Mistral AI provides large language models through both subscription-based chat interfaces and usage-based APIs. Founded in 2023, the company has raised €2.785 billion across three funding rounds to become Europe's highest-valued AI startup at a €11.7 billion valuation. Mistral operates a dual revenue model: Le Chat subscriptions for end users ($0-$24.99/user/month) and pay-per-token API access for developers ($0.028-$6.00 per million tokens). The platform offers specialized models for coding (Devstral), vision (Pixtral), and reasoning (Magistral) tasks, with deployment options spanning managed cloud, third-party platforms, and self-hosted configurations.</p>
<h3>Pricing Snapshot</h3><div class="tableResponsive"><table cellpadding="6" cellspacing="0"><tr><th>Model</th><th>Input Price</th><th>Output Price</th><th>Context Window</th><th>Status</th></tr><tr><td>Mistral Large 3</td><td>$2.00/M tokens</td><td>$6.00/M tokens</td><td>256K tokens</td><td>Current flagship</td></tr><tr><td>Devstral 2</td><td>$0.40/M tokens</td><td>$2.00/M tokens</td><td>256K tokens</td><td>Coding specialist</td></tr><tr><td>Mistral Medium 3</td><td>$0.40/M tokens</td><td>$2.00/M tokens</td><td>32K-64K tokens</td><td>Mid-tier</td></tr><tr><td>Codestral</td><td>$0.30/M tokens</td><td>$0.90/M tokens</td><td>Not specified</td><td>Legacy coding</td></tr><tr><td>Mistral Small 3</td><td>$0.20/M tokens</td><td>$0.60/M tokens</td><td>128K tokens</td><td>Cost-optimized</td></tr><tr><td>Mistral 7B</td><td>$0.028/M tokens</td><td>$0.054/M tokens</td><td>Not specified</td><td>Economy tier</td></tr></table></div>
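<p>To make the per-token rates concrete, the sketch below estimates monthly API spend for a hypothetical workload using the list prices in the snapshot table. The token volumes are invented for illustration, and the dictionary keys are labels taken from the table rather than Mistral's official API model identifiers.</p>
<pre><code class="language-python"># Rough monthly-cost estimate from the list prices above (USD per million tokens).
# Token volumes are hypothetical; actual billing is metered by Mistral's API.
PRICES = {
    "mistral-large-3":  {"input": 2.00, "output": 6.00},
    "mistral-medium-3": {"input": 0.40, "output": 2.00},
    "mistral-small-3":  {"input": 0.20, "output": 0.60},
}

def monthly_cost(model, input_tokens, output_tokens):
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Example: 50M input tokens and 10M output tokens per month on Mistral Medium 3
# = 50 * 0.40 + 10 * 2.00 = $40.00
print(round(monthly_cost("mistral-medium-3", 50_000_000, 10_000_000), 2))
</code></pre>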
<h3>Key Features & Capabilities</h3><p>Mistral AI provides a platform spanning consumer chat interfaces, developer APIs, and enterprise deployments with specialized models and flexible deployment options.</p><ul><li>Core Product Offerings: Le Chat AI Assistant with subscription tiers, API Platform with token-based billing, specialized models (Devstral for coding, Pixtral for vision, Magistral for reasoning), and fine-tuning capabilities</li><li>Deployment Options: Managed cloud via AI Studio, third-party integrations (Azure, AWS, Google Cloud, Snowflake, IBM, Outscale), and self-deployment (vLLM, TensorRT-LLM, TGI)</li><li>Enterprise Features: SAML SSO, audit logs and data export, private cloud deployments, custom model training, and white labeling</li><li>Model Specialization: Range from economy tier (Mistral 7B) to flagship (Mistral Large 3) with coding and vision specialists</li></ul>
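<p>As a rough illustration of the self-deployment path listed above, the sketch below calls a self-hosted Mistral model through vLLM's OpenAI-compatible server. The endpoint URL, port, placeholder API key, and model ID are assumptions chosen for the example, not values prescribed by Mistral.</p>
<pre><code class="language-python"># Sketch: querying a self-hosted Mistral model via vLLM's OpenAI-compatible server.
# Assumes a server is already running locally, e.g. started with:
#   python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.3
# The endpoint, port, and model ID below are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local vLLM endpoint, not Mistral's managed cloud
    api_key="not-needed-for-local",       # vLLM does not require a key unless one is configured
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    messages=[{"role": "user", "content": "Summarize our Q3 pricing changes."}],
)
print(response.choices[0].message.content)
</code></pre>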
<h3>Pricing Model Analysis</h3><p>Mistral AI employs a dual-revenue model combining subscription access with consumption-based API pricing.</p><div class="tableResponsive"><table cellpadding="6" cellspacing="0"><tr><th>Metric Type</th><th>What Is Measured</th><th>Why It Matters</th></tr><tr><td>Value Metric</td><td>Performance per dollar spent</td><td>Developers can benchmark cost-per-task against alternatives to make informed model selection decisions</td></tr><tr><td>Usage Metric</td><td>API tokens consumed (input/output)</td><td>Aligns costs with actual model usage and complexity</td></tr><tr><td>Billable Metric</td><td>Threshold-based cumulative spending</td><td>Automatic rate limit increases without manual upgrades</td></tr></table></div>
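<p>The billable metric above describes rate limits that rise automatically as cumulative spend crosses thresholds. The sketch below shows the general shape of such a policy; the spend thresholds and request limits are invented for illustration and are not Mistral's published tiers.</p>
<pre><code class="language-python"># Illustrative only: the thresholds and limits below are invented to show the
# shape of threshold-based scaling; they are not Mistral's actual rate-limit tiers.
RATE_LIMIT_TIERS = [
    # (cumulative_spend_usd, requests_per_minute)
    (0,      60),
    (100,    300),
    (1_000,  1_200),
    (10_000, 6_000),
]

def rate_limit_for(cumulative_spend_usd):
    """Return the highest limit whose spend threshold has been reached."""
    limit = RATE_LIMIT_TIERS[0][1]
    for threshold, rpm in RATE_LIMIT_TIERS:
        if cumulative_spend_usd >= threshold:
            limit = rpm
    return limit

print(rate_limit_for(250))  # 300 rpm once $100 of cumulative spend is crossed
</code></pre>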
<h3>Pricing Evolution Timeline</h3><div class="tableResponsive"><table cellpadding="6" cellspacing="0"><tr><th>Date</th><th>Milestone</th><th>Source</th></tr><tr><td>Feb 26, 2024</td><td>Mistral Large Launch - $8 input / $24 output per million tokens</td><td><a href='https://techcrunch.com/2024/02/26/mistral-ai-releases-new-model-to-rival-gpt-4-and-other-top-closed-models/' target='_blank'>TechCrunch Launch Coverage</a></td></tr><tr><td>Jun 11, 2024</td><td>Series B Funding - €600M led by General Catalyst</td><td><a href='https://techcrunch.com/2024/06/11/paris-based-ai-startup-mistral-ai-raises-640-million/' target='_blank'>TechCrunch Series B</a></td></tr><tr><td>Sep 24, 2024</td><td>Major Price Reduction - Mistral Large: 75% cut to $2/$6; Free API tier launch</td><td><a href='https://mistral.ai/news/september-24-release/' target='_blank'>Mistral AI September Release</a></td></tr><tr><td>Feb 2025</td><td>Le Chat Subscription Launch - Pro ($14.99), Team ($24.99/user), Enterprise (custom)</td><td><a href='https://mistral.ai/news/le-chat-launch/' target='_blank'>Mistral AI Le Chat Launch</a></td></tr><tr><td>May 7, 2025</td><td>Mistral Medium 3 Launch - $0.40 input / $2.00 output</td><td><a href='https://mistral.ai/news/mistral-medium-3/' target='_blank'>Mistral AI Medium 3</a></td></tr><tr><td>Sep 9, 2025</td><td>Series C Mega-Round - €1.7B at a €11.7B valuation</td><td><a href='https://mistral.ai/news/series-c/' target='_blank'>Mistral AI Series C</a></td></tr><tr><td>Dec 2, 2025</td><td>Mistral 3 Family Launch - Enterprise platform evolution</td><td><a href='https://mistral.ai/news/mistral-3/' target='_blank'>Mistral AI Mistral 3</a></td></tr></table></div>
<h3>Customer Sentiment Highlights</h3><ul><li><strong>Cost efficiency for production workloads:</strong> "I switched to mistral-3-medium-0525 a few months back... It's been insanely fast, cheap, reliable, and follows formatting instructions to the letter. I was (and still am) super super impressed." — Production user, Hacker News</li><li><strong>Competitive API pricing:</strong> "The new Mistral Small 3 API model is $0.10/$0.30. For comparison, GPT-4o-mini is $0.15/$0.60." — Developer, Hacker News</li><li><strong>Batch API value:</strong> "For simple tasks with pre-defined scope (such as categorization, summarization, etc.) they are the option I choose. I use mistral-small with batch API and it's probably the best cost-efficient option out there." — Developer, Hacker News</li><li><strong>Mid-tier positioning:</strong> "Mistral-medium is really impressive and sits perfectly sandwiched between GPT-3.5 and GPT-4. In my (limited) experience it's a great choice for anyone that isn't able to get consistency or quality out of GPT-3.5." — Developer, Reddit r/LocalLLaMA</li></ul>
<h3>Metronome’s Take</h3>
<p>Mistral AI operates a dual-revenue model that combines per-user subscriptions for its chat interfaces with consumption-based API billing for developers. Subscription plans provide predictable monthly access for business users, while the API is priced on a per-token basis with separate input and output rates. This separation allows developers to adopt Mistral through pure usage-based pricing, while non-technical users can engage through fixed subscriptions—resulting in two distinct customer journeys and billing experiences.</p>
<p><strong>Recommendation:</strong> This infrastructure-style, usage-based pricing model aligns well with developer-focused AI platforms. The combination of automatic scaling, model-tier differentiation, and asymmetric token pricing follows established best practices in the LLM API market. Developers building production applications benefit from predictable unit economics and low operational overhead, while organizations seeking long-term budget certainty may need to account for ongoing price evolution as models and pricing continue to mature.</p>
<h4>Key Insights</h4><ul><li> <strong>Asymmetric Input/Output Token Pricing:</strong> Mistral prices output tokens materially higher than input tokens across its API models, reflecting the greater computational cost of generation relative to prompt ingestion. <p><strong>Benefit:</strong> Applications with large context windows but limited generation—such as document analysis or retrieval-augmented workflows—incur lower relative costs than chat-heavy or generative use cases.</p></li><li> <strong>Threshold-Based Automatic Scaling:</strong> API rate limits increase automatically as customer spend grows, without requiring manual plan upgrades or contract renegotiation. <p><strong>Benefit:</strong> Developers can scale usage smoothly as workloads grow, avoiding operational friction associated with capacity planning or tier management.</p></li><li> <strong>Tiered Model Specialization:</strong> Mistral offers multiple model families optimized for different workloads, ranging from lightweight, low-cost models for classification and extraction to larger models designed for coding and complex reasoning. <p><strong>Benefit:</strong> Teams can align model choice with task complexity—routing simple workloads to lower-cost models and reserving higher-end models for tasks that require deeper reasoning—while staying within a single vendor ecosystem.</p></li></ul>
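<p>The model-specialization insight above can be made concrete with a small routing sketch: simple tasks go to a cheaper model while complex reasoning stays on the flagship. Prices are the list rates from the snapshot table; the routing rule, task mix, and token counts are invented for illustration.</p>
<pre><code class="language-python"># Sketch of tiered model routing: send simple tasks to a cheaper model and
# reserve the flagship for complex reasoning. Prices are the list rates quoted
# in the snapshot table; the routing rule and token counts are invented.
PRICE = {  # USD per million tokens (input, output)
    "mistral-small-3": (0.20, 0.60),
    "mistral-large-3": (2.00, 6.00),
}

def task_cost(model, in_tok, out_tok):
    pin, pout = PRICE[model]
    return (in_tok * pin + out_tok * pout) / 1e6

def route(task_kind):
    # Invented routing rule: classification/extraction to Small, reasoning to Large.
    return "mistral-small-3" if task_kind in {"classify", "extract"} else "mistral-large-3"

tasks = [("classify", 2_000, 50), ("extract", 4_000, 300), ("reason", 3_000, 1_200)]
routed_total  = sum(task_cost(route(kind), i, o) for kind, i, o in tasks)
flagship_only = sum(task_cost("mistral-large-3", i, o) for _, i, o in tasks)
print(routed_total, flagship_only)  # the routed mix costs less than flagship-only
</code></pre>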
