Xiaomi
MiMo-V2-Flash
MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, it ranks #1 among open-source models globally, delivering performance comparable to Claude Sonnet 4.5 at only about 3.5% of the cost. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).
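A minimal sketch of how the reasoning toggle is expressed in a chat-completions request body. The model slug `xiaomi/mimo-v2-flash` is an assumption for illustration; check the model page for the exact identifier.

```python
# Sketch of an OpenRouter-style chat-completions request body with the
# hybrid-thinking toggle. The model slug below is an assumed placeholder.
import json


def build_request(prompt: str, enable_reasoning: bool = True) -> dict:
    """Build a request body; `reasoning.enabled` toggles thinking mode."""
    return {
        "model": "xiaomi/mimo-v2-flash",  # assumed slug, verify before use
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": enable_reasoning},
    }


payload = build_request("Explain mixture-of-experts routing.", enable_reasoning=False)
print(json.dumps(payload["reasoning"]))  # → {"enabled": false}
```

Sending this body to the chat-completions endpoint with reasoning disabled trades thinking tokens for lower latency; with it enabled, the model emits reasoning tokens before its answer.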
- Input / 1M tokens: $0.090
- Output / 1M tokens: $0.290
- Cached input / 1M tokens: $0.045
- Context window: 262K tokens
- Provider: Xiaomi
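The listed prices make per-request costs easy to estimate. A small sketch using the rates above (input $0.090, output $0.290, cached input $0.045, each per 1M tokens):

```python
# Back-of-the-envelope cost estimate from the listed per-1M-token prices.
PRICE_PER_TOKEN = {
    "input": 0.090 / 1_000_000,
    "output": 0.290 / 1_000_000,
    "cached": 0.045 / 1_000_000,
}


def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      cached_tokens: int = 0) -> float:
    """Cost in USD; cached input tokens are billed at the cached rate."""
    fresh = input_tokens - cached_tokens
    return (fresh * PRICE_PER_TOKEN["input"]
            + cached_tokens * PRICE_PER_TOKEN["cached"]
            + output_tokens * PRICE_PER_TOKEN["output"])


# 1M fresh input tokens plus 100K output tokens:
print(round(estimate_cost_usd(1_000_000, 100_000), 3))  # → 0.119
```

Cache hits halve the input cost, which matters for agent workloads that resend long contexts on every turn.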
Performance
Median streaming throughput and first-token latency measured by Artificial Analysis.
- Output tokens / sec: 126 t/s
- Time to first token: 1.47 s
Benchmarks
Intelligence, coding, and math indexes plus the underlying evaluation scores.
- Intelligence Index: 30
- Coding Index: 26
- Math Index: 68
- MMLU-Pro: 74.4%
- GPQA: 65.6%
- HLE: 8.0%
- LiveCodeBench: 40.2%
- SciCode: 25.9%
- MATH-500: —
- AIME: —
Benchmarks via Artificial Analysis