Mercury 2
Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. It is more than 5x faster than leading speed-optimized LLMs such as Claude 4.5 Haiku and GPT-5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output, and it is OpenAI API compatible. It is built for coding workflows where latency compounds, for real-time voice and search, and for agent loops. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury-2).
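Because the API is OpenAI compatible, existing OpenAI client code can target Mercury 2 by swapping the base URL. A minimal sketch using the OpenAI Python SDK; the base URL and model id below are assumptions, so verify both against Inception's documentation:

```python
# Minimal sketch: calling Mercury 2 via its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.inceptionlabs.ai/v1",  # assumed endpoint; check Inception's docs
    api_key="YOUR_INCEPTION_API_KEY",
)

response = client.chat.completions.create(
    model="mercury-2",  # assumed model id; check Inception's docs
    messages=[
        {"role": "user", "content": "Write a function that reverses a linked list."},
    ],
    stream=True,  # stream tokens to benefit from the high throughput
)

for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```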
Pricing
- Input / 1M tokens: $0.250
- Cached input / 1M tokens: $0.025
- Output / 1M tokens: $0.750
- Context window: 128K tokens
- Provider: Inception
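As a worked example of the rates above, consider a request with 10K fresh input tokens, 10K cached input tokens, and 2K output tokens; the token counts are illustrative, the per-million rates are the listed prices:

```python
# Worked example: dollar cost of one request at the listed Mercury 2 rates.
INPUT_PER_M = 0.250    # $ per 1M uncached input tokens
CACHED_PER_M = 0.025   # $ per 1M cached input tokens
OUTPUT_PER_M = 0.750   # $ per 1M output tokens

def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request."""
    return (
        input_tokens * INPUT_PER_M
        + cached_tokens * CACHED_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000

# 10K fresh input + 10K cached input + 2K output -> $0.00425
print(f"${request_cost(10_000, 10_000, 2_000):.5f}")
```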
Performance
Median streaming throughput and first-token latency measured by Artificial Analysis.
- Output tokens / sec: 872
- Time to first token: 4.10 s
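These two numbers give a rough end-to-end latency estimate, total time ≈ time to first token + output tokens / throughput. A back-of-the-envelope sketch that ignores network variance:

```python
# Rough latency model from the measured numbers above:
# total_seconds ≈ time_to_first_token + output_tokens / throughput.
TTFT_S = 4.10          # measured time to first token (s)
THROUGHPUT_TPS = 872   # measured output tokens per second

def estimated_latency(output_tokens: int) -> float:
    return TTFT_S + output_tokens / THROUGHPUT_TPS

print(f"{estimated_latency(1_000):.2f} s")  # ≈ 5.25 s for 1,000 output tokens
```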
Benchmarks
Intelligence, coding, and math indexes plus the underlying evaluation scores.
- Intelligence Index: 33
- Coding Index: 31
- Math Index: —
- MMLU-Pro: —
- GPQA: 77.0%
- HLE: 15.5%
- LiveCodeBench: —
- SciCode: 38.7%
- MATH-500: —
- AIME: —
Benchmarks via Artificial Analysis