ERNIE 4.5 21B A3B

A text-based Mixture-of-Experts (MoE) model with 21B total parameters, of which 3B are activated per token. It builds on the ERNIE 4.5 family's heterogeneous MoE architecture, which uses modality-isolated routing together with specialized routing and balancing losses, and supports a 131K-token context length. Efficient inference comes from multi-expert parallel collaboration and quantization, while post-training with SFT, DPO, and UPO optimizes performance across diverse applications.
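To illustrate why only ~3B of the 21B parameters do work per token, here is a minimal top-k MoE gating sketch. The expert count, top-k value, and dimensions are made-up illustrative values, and the generic softmax-over-top-k gate stands in for ERNIE's actual router; this is not Baidu's implementation.

```python
import numpy as np

# Illustrative top-k MoE gating: per token, a router scores all experts
# and only the top-k are activated, so per-token compute scales with the
# activated (~3B) rather than total (~21B) parameters.
# All sizes below are hypothetical, not ERNIE 4.5's configuration.

rng = np.random.default_rng(0)
num_experts, top_k, d_model = 64, 6, 8

def moe_layer(x, router_w, expert_ws):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                            # (tokens, num_experts)
    chosen = np.argsort(logits, axis=-1)[:, -top_k:] # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, chosen[t]]
        gates = np.exp(scores - scores.max())
        gates /= gates.sum()                         # softmax over chosen experts only
        for gate, e in zip(gates, chosen[t]):
            out[t] += gate * (x[t] @ expert_ws[e])   # only k experts run per token
    return out

x = rng.standard_normal((4, d_model))                # 4 example tokens
router_w = rng.standard_normal((d_model, num_experts))
expert_ws = rng.standard_normal((num_experts, d_model, d_model))
print(moe_layer(x, router_w, expert_ws).shape)       # (4, 8)
```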

Input / 1M tokens: $0.070
Output / 1M tokens: $0.280
Context window: 120K tokens
Provider: Baidu
Knowledge cutoff: 2025-03-31
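At the listed rates, per-request cost is a simple linear function of token counts. The sketch below uses made-up example request sizes; only the two rates come from the listing.

```python
# Per-token rates derived from the listed per-1M-token prices.
INPUT_RATE = 0.070 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.280 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical example: a 100K-token prompt with a 2K-token reply.
print(f"${request_cost(100_000, 2_000):.4f}")  # $0.0076
```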

Performance

Median streaming throughput and first-token latency measured by Artificial Analysis.

[Charts: output tokens / sec; time to first token]