Mixtral 8x22B Instruct

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). As a sparse mixture-of-experts model, it activates only 39B of its 141B total parameters per token, making it highly cost-efficient for its size. Its strengths include:

- strong math, coding, and reasoning
- a large context window (64k tokens)
- fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/). #moe
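For orientation, here is a minimal sketch of querying the model through an OpenAI-compatible chat completions endpoint. The base URL and the model slug are assumptions inferred from this page's link structure, not confirmed values; substitute whatever your provider documents.

```python
# Minimal sketch: calling Mixtral 8x22B Instruct via an OpenAI-compatible
# API. Base URL and model slug are ASSUMPTIONS, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed gateway endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",  # assumed slug, per /models/mistralai/...
    messages=[
        {"role": "user", "content": "Explain mixture-of-experts in two sentences."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```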

| Spec | Value |
| --- | --- |
| Provider | Mistral |
| Context window | 66K tokens |
| Knowledge cutoff | 2024-01-31 |
| Input / 1M tokens | $2.00 |
| Cached input / 1M tokens | $0.20 |
| Output / 1M tokens | $6.00 |
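As a quick sanity check on these rates, a back-of-the-envelope cost calculation for a single request (the token counts in the example are illustrative, not from this page):

```python
# Cost per request at the rates in the table above: $2.00 per 1M input
# tokens, $6.00 per 1M output tokens, $0.20 per 1M cached input tokens.
INPUT_PER_M = 2.00
OUTPUT_PER_M = 6.00
CACHED_INPUT_PER_M = 0.20

def request_cost(input_tokens: int, output_tokens: int,
                 cached_input_tokens: int = 0) -> float:
    """Return the cost in USD for one request."""
    return (
        input_tokens * INPUT_PER_M
        + output_tokens * OUTPUT_PER_M
        + cached_input_tokens * CACHED_INPUT_PER_M
    ) / 1_000_000

# Example: a 50K-token prompt (10K of it served from cache) producing
# a 2K-token reply: 40K uncached input + 10K cached input + 2K output.
print(f"${request_cost(40_000, 2_000, 10_000):.4f}")  # $0.0940
```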

Performance

Median streaming throughput and first-token latency, as measured by Artificial Analysis:

[Charts: output tokens / sec; time to first token]