Mistral - API
Enterprise-grade LLM models for text generation, code completion, embeddings, and reasoning tasks through flexible API and deployment options


<ul><li><strong>Pricing Model:</strong> Usage-based (pay-per-token)</li><li><strong>Packaging Model:</strong> Tiered - model-based portfolio (Small/Medium/Large) with Free tier</li><li><strong>Credit Model:</strong> N/A</li></ul>
Last update: February 4, 2026
<h3>Product Overview</h3><p>Mistral AI is a French AI company founded by former DeepMind and Meta researchers that provides large language models through both API access and open-source releases. The company offers a comprehensive portfolio of production-ready language models ranging from lightweight 7B parameter models to flagship 675B parameter sparse mixture-of-experts architectures. Mistral&#039;s unique value proposition combines competitive per-token pricing with flexible deployment options including cloud APIs, on-premises installations, edge computing, and hybrid architectures. The platform emphasizes European data sovereignty, cost efficiency, and transparency while serving both developers through pay-as-you-go APIs and enterprises through custom deployments.</p>
<h3>Pricing Snapshot</h3><div class="tableResponsive"><table cellpadding="6" cellspacing="0"><tr><th>Model</th><th>Input Price</th><th>Output Price</th><th>Context Window</th><th>Status</th></tr><tr><td>Mistral Large 3</td><td>$2.00/1M tokens</td><td>$5.00/1M tokens</td><td>256K tokens</td><td>Production</td></tr><tr><td>Mistral Medium 3.1</td><td>$0.40/1M tokens</td><td>$2.00/1M tokens</td><td>128K tokens</td><td>Production</td></tr><tr><td>Mistral Small 3.2</td><td>$0.10/1M tokens</td><td>$0.30/1M tokens</td><td>128K tokens</td><td>Production</td></tr><tr><td>Mistral Nemo</td><td>$0.15/1M tokens</td><td>$0.15/1M tokens</td><td>128K tokens</td><td>Production</td></tr><tr><td>Pixtral 12B (Vision)</td><td>$0.15/1M tokens</td><td>$0.15/1M tokens</td><td>128K tokens</td><td>Production</td></tr><tr><td>Devstral 2</td><td>$0.40/1M tokens</td><td>$2.00/1M tokens</td><td>256K tokens</td><td>Free (promotional)</td></tr><tr><td>Devstral Small 2</td><td>$0.10/1M tokens</td><td>$0.30/1M tokens</td><td>256K tokens</td><td>Production</td></tr><tr><td>Embeddings</td><td>$0.01/1M tokens</td><td>N/A</td><td>N/A</td><td>Production</td></tr></table></div>
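The per-token arithmetic behind the table above is straightforward to sketch. The snippet below is an illustrative cost estimator using the snapshot rates (USD per 1M tokens); the dictionary keys are informal model labels chosen for this example, not official API identifiers, and prices may change.

```python
# Estimate per-request cost from Mistral's published per-token rates.
# Rates are taken from the pricing snapshot table (USD per 1M tokens);
# treat this as an illustrative sketch, not a billing tool.
PRICES_PER_M = {
    "mistral-large-3":    {"input": 2.00, "output": 5.00},
    "mistral-medium-3.1": {"input": 0.40, "output": 2.00},
    "mistral-small-3.2":  {"input": 0.10, "output": 0.30},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single call for the given token counts."""
    rates = PRICES_PER_M[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# A 10K-token prompt with a 1K-token reply on Medium 3.1:
cost = request_cost("mistral-medium-3.1", 10_000, 1_000)
print(f"${cost:.4f}")  # → $0.0060 ($0.0040 input + $0.0020 output)
```

Note how output tokens dominate the bill even at one tenth the volume, which is why the input/output split matters when projecting costs.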
<h3>Key Features & Capabilities</h3><p>Mistral AI provides a comprehensive portfolio of language models with flexible deployment options spanning cloud APIs, on-premises installations, and edge computing, serving use cases from lightweight edge processing to enterprise-scale deployments.</p><ul><li>Model Portfolio: Flagship models include Mistral Large 3 (41B active parameters from 675B total in sparse mixture-of-experts architecture), Medium 3.1, and Small 3.2, plus specialized models like Codestral for code completion, Devstral 2 for software engineering, OCR 3 for document processing, Voxtral Mini for transcription, and multimodal Pixtral 12B for text and image processing, with edge-optimized Ministral 3 series (3B, 8B, 14B) for local deployment.</li><li>Technical Capabilities: Context windows ranging from 4K to 256K tokens depending on model, multilingual support across 40+ languages, function calling and tool integration, real-time streaming responses, and custom fine-tuning and model customization options.</li><li>Deployment Flexibility: Full range of deployment options including managed cloud API through La Plateforme, native multi-cloud integrations with AWS, Azure, and Google Cloud, on-premises installations for data sovereignty requirements, edge computing with lightweight models for local processing, and hybrid architectures combining cloud and local deployment.</li><li>Enterprise Features: Comprehensive enterprise support including Single Sign-On (SSO) integration, organization-level rate limits and billing, custom model training through &quot;My Tailor is Mistral&quot; program, white labeling and custom UI options, and GDPR compliance with European data residency.</li></ul>
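To make the function-calling and streaming capabilities above concrete, here is a minimal sketch of a chat-completion request body. Field names, the model identifier, and the endpoint follow Mistral's public OpenAI-style chat API at time of writing, but verify them against the current API reference; the `get_weather` tool is a hypothetical example.

```python
import json

# Sketch of a request body exercising function calling and streaming.
payload = {
    "model": "mistral-small-latest",  # assumed alias; check current model list
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": True,  # request incremental server-sent-event chunks
}
body = json.dumps(payload)
# POST this to the chat completions endpoint with your API key, e.g.:
#   requests.post("https://api.mistral.ai/v1/chat/completions",
#                 headers={"Authorization": f"Bearer {api_key}"}, data=body)
```

Because the request shape mirrors other OpenAI-compatible APIs, existing client code can often be pointed at Mistral by swapping the base URL and model name.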
<h3>Pricing Model Analysis</h3><p>Mistral AI uses a pure usage-based pricing model where customers pay per token processed, with separate rates for input and output tokens across different model tiers that correspond to capability levels.</p><div class="tableResponsive"><table cellpadding="6" cellspacing="0"><tr><th>Metric Type</th><th>What Is Measured</th><th>Why It Matters</th></tr><tr><td>Value Metric</td><td>AI model intelligence and performance</td><td>Customers pay for model capability tier (Small/Medium/Large)</td></tr><tr><td>Usage Metric</td><td>Tokens processed (input and output)</td><td>Direct correlation with actual usage and value delivered</td></tr><tr><td>Billable Metric</td><td>Tokens consumed per API call</td><td>Precise measurement enables cost control and optimization</td></tr></table></div>
<h3>Pricing Evolution Timeline</h3><div class="tableResponsive"><table cellpadding="6" cellspacing="0"><tr><th>Date</th><th>Milestone</th><th>Source</th></tr><tr><td>Sep 27, 2023</td><td>Mistral 7B launched as free open-source model</td><td><a href='https://mistral.ai/news/announcing-mistral-7b' target='_blank'>Mistral AI Blog</a></td></tr><tr><td>Dec 11, 2023</td><td>Mixtral 8x7B released under Apache 2.0 license</td><td><a href='https://mistral.ai/news/mixtral-of-experts' target='_blank'>Mistral AI Blog</a></td></tr><tr><td>Feb 26, 2024</td><td>First paid API launch: Mistral Large at $8/$24 per million tokens</td><td><a href='https://techcrunch.com/2024/02/26/mistral-ai-releases-new-model-to-rival-gpt-4-and-its-own-chat-assistant/' target='_blank'>TechCrunch</a></td></tr><tr><td>May 29, 2024</td><td>Codestral coding model introduced</td><td><a href='https://techcrunch.com/2024/05/29/mistral-releases-its-first-generative-ai-model-for-code' target='_blank'>TechCrunch</a></td></tr><tr><td>Sep 17, 2024</td><td>Major pricing overhaul: free tier launch + 50%+ price cuts</td><td><a href='https://techcrunch.com/2024/09/17/mistral-launches-a-free-tier-for-developers-to-test-its-ai-models' target='_blank'>TechCrunch</a></td></tr><tr><td>May 7, 2025</td><td>Mistral Medium 3 launched at $0.40/$2.00 per million tokens</td><td><a href='https://techcrunch.com/2025/05/07/mistral-claims-its-newest-ai-model-delivers-leading-performance-for-the-price' target='_blank'>TechCrunch</a></td></tr><tr><td>Dec 2, 2025</td><td>Mistral 3 family launched with cost-efficiency focus</td><td><a href='https://techcrunch.com/2025/12/02/mistral-closes-in-on-big-ai-rivals-with-mistral-3-open-weight-frontier-and-small-models' target='_blank'>TechCrunch</a></td></tr></table></div>
<h3>Customer Sentiment Highlights</h3><ul><li>“This model delivers state-of-the-art results at an impressive eight times lower cost than many leading competitors... What truly distinguishes Mistral Medium 3 is its ability to deliver more than 90% of the performance of top-tier models like Claude 3.7 Sonnet, but at a fraction of the price.”<b> <span class="pricingHiphenSymb"> - </span>Paul O&#039;Brien, LinkedIn</b></li><li>“I&#039;ve been using Mistral Medium 3 last couple of days, and I&#039;m honestly surprised at how good it is. Highly recommend giving it a try if you haven&#039;t, especially if you are trying to reduce costs. I&#039;ve basically switched from Claude to Mistral and honestly prefer it even if costs were equal.”<b> <span class="pricingHiphenSymb"> - </span>Hacker News user</b></li><li>“It&#039;s been insanely fast, cheap, reliable, and follows formatting instructions to the letter. I was (and still am) super super impressed.”<b> <span class="pricingHiphenSymb"> - </span>barrell, Hacker News</b></li><li>“I use mistral-small with batch API and it&#039;s probably the best cost-efficient option out there.”<b> <span class="pricingHiphenSymb"> - </span>druskacik, Hacker News</b></li><li>“The new Mistral Small 3 API model is $0.10/$0.30. For comparison, GPT-4o-mini is $0.15/$0.60.”<b> <span class="pricingHiphenSymb"> - </span>simonw, Hacker News</b></li></ul>
<h3>Metronome&#039;s Take</h3>
<p>Mistral AI operates a consumption-based API pricing model that charges customers based on token usage, with no minimum commitments or baseline subscription fees. The platform segments its offering through model tiers (Small, Medium, Large), each with distinct per-token rates that allow organizations to select models based on capability requirements and cost constraints. Token pricing remains consistent across deployment methods, whether customers access models through the managed cloud API, multi-cloud platforms, or on-premises installations.</p>
<p><strong>Recommendation:</strong> This infrastructure-style, usage-based pricing model can align well with developer-focused AI platforms. The combination of automatic scaling, model-tier differentiation, and asymmetric token pricing follows established patterns in the LLM API market. Developers building production applications can benefit from predictable unit economics and low operational overhead, while organizations seeking long-term budget certainty may need to account for ongoing price evolution as models and pricing continue to mature.</p>
<h4>Key Insights</h4><ul><li> <strong>Tiered model pricing aligned to capability levels:</strong> Mistral structures pricing across three primary model tiers with distinct per-token rates, allowing customers to optimize for either performance or cost efficiency based on use case complexity. <p><strong>Benefit:</strong> Applications can be right-sized to match task requirements, using lower-cost models for simpler tasks and reserving premium models for complex reasoning, resulting in predictable cost control.</p></li><li> <strong>Asymmetric input/output pricing reflecting compute economics:</strong> Mistral prices output tokens materially higher than input tokens across its API models, reflecting the greater computational cost of generation relative to prompt ingestion. <p><strong>Benefit:</strong> Applications with large context windows but limited generation, such as document analysis or retrieval-augmented workflows, incur lower relative costs than chat-heavy or generative use cases.</p></li><li> <strong>Free tier with usage-based graduation:</strong> The Experiment Plan provides free access to all models for testing and evaluation, with automatic progression to paid tiers based on cumulative spend milestones rather than hard usage caps or time limits. <p><strong>Benefit:</strong> Applications can be thoroughly evaluated on real workloads before committing to paid usage, reducing adoption friction while creating a natural path to paid conversion as usage scales.</p></li></ul>
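The asymmetric input/output pricing insight above can be made concrete with a quick comparison. The two workloads below are illustrative assumptions, and the rates are Medium 3.1's from the snapshot table ($0.40/1M input, $2.00/1M output).

```python
# Compare two illustrative monthly workloads on Mistral Medium 3.1.
IN_RATE, OUT_RATE = 0.40, 2.00  # USD per million tokens (snapshot table)

def monthly_cost(input_m: float, output_m: float) -> float:
    """USD cost for input_m / output_m million tokens per month."""
    return input_m * IN_RATE + output_m * OUT_RATE

# Document analysis: large prompts, short summaries (90M in, 10M out).
doc_analysis = monthly_cost(90, 10)   # 90*0.40 + 10*2.00 = $56
# Chat-heavy app: same 100M total volume, generation-heavy (40M in, 60M out).
chat_heavy = monthly_cost(40, 60)     # 40*0.40 + 60*2.00 = $136
print(f"${doc_analysis:.2f} vs ${chat_heavy:.2f}")
```

Same total token volume, roughly 2.4x the bill: the input/output mix, not just the raw volume, drives costs under this pricing structure.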
