Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model from Mistral AI for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters; the Instruct variant is fine-tuned by Mistral. #moe
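As a rough illustration of the sparse Mixture-of-Experts idea described above, the sketch below routes each token to its top-2 of 8 expert feed-forward networks and mixes their outputs with the router's softmax weights. The shapes, the toy experts, and the routing details are simplifying assumptions for clarity, not Mistral's implementation.

```python
import numpy as np

def moe_layer(x, gate_w, experts, top_k=2):
    """Sparse MoE layer: route each token to its top_k experts and mix outputs.

    x       : (n_tokens, d_model) token activations
    gate_w  : (d_model, n_experts) router weights
    experts : list of callables, each mapping a (d_model,) vector to (d_model,)
    """
    logits = x @ gate_w                        # (n_tokens, n_experts) router scores
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(logits[t])[-top_k:]   # indices of the selected experts
        w = np.exp(logits[t, top])
        w /= w.sum()                           # softmax over the selected experts only
        for weight, e in zip(w, top):
            out[t] += weight * experts[e](x[t])
    return out

# Toy setup: 8 random "experts" standing in for the feed-forward networks.
rng = np.random.default_rng(0)
d_model, n_experts = 16, 8
experts = [
    (lambda W: (lambda v: np.tanh(v @ W)))(rng.standard_normal((d_model, d_model)))
    for _ in range(n_experts)
]
x = rng.standard_normal((4, d_model))                # 4 tokens
gate_w = rng.standard_normal((d_model, n_experts))
print(moe_layer(x, gate_w, experts).shape)           # (4, 16)
```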
- Input / 1M tokens: $0.540
- Output / 1M tokens: $0.540 (see the cost sketch below)
- Context window: 33K tokens
- Provider: Mistral
- Knowledge cutoff: 2023-12-31
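As a back-of-the-envelope illustration of the pricing above, the sketch below estimates the dollar cost of a single request at the listed $0.540 per million tokens for input and output; the example token counts are made up.

```python
# Listed rates: $0.540 per 1M tokens for both input and output.
INPUT_USD_PER_M = 0.540
OUTPUT_USD_PER_M = 0.540

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed per-million-token rates."""
    return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# Hypothetical example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")   # $0.001350
```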
Performance
Median streaming throughput and first-token latency as measured by Artificial Analysis (a sketch of what these metrics capture follows the list).
- Output tokens / sec: 0 t/s
- Time to first token: 0.00s
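The figures above are medians reported by Artificial Analysis. As an illustration of what they measure, this sketch times a streaming response: time to first token is the delay until the first streamed token arrives, and throughput is output tokens divided by total generation time. The `stream_tokens` generator is a hypothetical stand-in for a streaming client, not a real API.

```python
import time
from typing import Iterable, Tuple

def measure_stream(tokens: Iterable[str]) -> Tuple[float, float]:
    """Return (time_to_first_token_seconds, output_tokens_per_second)."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in tokens:
        if ttft is None:
            ttft = time.perf_counter() - start   # latency until the first token arrives
        count += 1
    elapsed = time.perf_counter() - start
    return ttft, (count / elapsed if elapsed > 0 else 0.0)

def stream_tokens():
    """Hypothetical stand-in for a streaming client yielding tokens as they arrive."""
    for tok in ["Mixtral", " is", " a", " sparse", " MoE", "."]:
        time.sleep(0.05)                         # simulated network/generation delay
        yield tok

ttft, tps = measure_stream(stream_tokens())
print(f"TTFT: {ttft:.2f}s, throughput: {tps:.1f} t/s")
```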
Benchmarks
Intelligence, coding, and math indexes plus the underlying evaluation scores.
- Intelligence Index: 8
- Coding Index: —
- Math Index: —
- MMLU-Pro: 38.7%
- GPQA: 29.2%
- HLE: 4.5%
- LiveCodeBench: 6.6%
- SciCode: 2.8%
- MATH-500: 29.9%
- AIME: 0.0%
Benchmarks via Artificial Analysis