
Qwen3 8B

Qwen3-8B is a dense 8.2B-parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between a "thinking" mode for math, coding, and logical inference and a "non-thinking" mode for general conversation. The model is fine-tuned for instruction following, agent integration, creative writing, and multilingual use across 100+ languages and dialects. It natively supports a 32K-token context window, extendable to 131K tokens with YaRN scaling.
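The 32K→131K extension mentioned above corresponds to a 4× RoPE scaling factor. A minimal sketch of how that factor is derived, assuming the `rope_scaling` config format used by Hugging Face transformers (the key names are an assumption, not taken from this page):

```python
# Deriving the YaRN scaling factor that stretches Qwen3-8B's native
# 32K context to 131K (values from the description above).
NATIVE_CONTEXT = 32_768      # tokens supported natively
EXTENDED_CONTEXT = 131_072   # tokens reachable with YaRN

factor = EXTENDED_CONTEXT / NATIVE_CONTEXT  # 4.0

# Assumed config fragment in the Hugging Face transformers style;
# check the model's own config for the exact schema.
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": NATIVE_CONTEXT,
}

print(rope_scaling)
```

Note that enabling a scaling factor like this trades some short-context accuracy for long-context reach, so it is typically applied only when prompts actually exceed the native window.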

Provider: Qwen
Context window: 41K tokens
Knowledge cutoff: 2025-03-31
Input / 1M tokens: $0.050
Cached input / 1M tokens: $0.050
Output / 1M tokens: $0.400
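The per-million-token prices above translate into request cost as follows. A minimal sketch using the listed rates ($0.050 input, $0.050 cached input, $0.400 output); the function name and token counts are illustrative, not from this page:

```python
# Estimate USD cost of one request from the listed per-1M-token prices.
PRICE_PER_M = {"input": 0.050, "cached_input": 0.050, "output": 0.400}

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return cost in USD; cached_tokens is the portion of input served from cache."""
    fresh = input_tokens - cached_tokens
    return (
        fresh * PRICE_PER_M["input"]
        + cached_tokens * PRICE_PER_M["cached_input"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# Example: 10K input tokens (2K of them cached) and 1K output tokens
cost = request_cost(10_000, 1_000, cached_tokens=2_000)  # 0.0009 USD
```

With cached input priced the same as fresh input here, caching does not change the bill for this model; the split is kept in the sketch because many providers discount cached tokens.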

Performance

Median streaming throughput and first-token latency measured by Artificial Analysis.
