Chapter Four · The New Moore's Law
Four times faster — and stacked.
Moore's Law doubled transistor counts every twenty-four months, and it carried half a century of progress. The AI equivalents run faster: frontier training compute doubles roughly every five months, capability density every three and a half, and inference cost halves every two and a half. These trends compound.
A query that cost a dollar eighteen months ago costs a cent today — and the model answering it is three times as capable. Frontier training compute is growing at 4–5× per year, per Epoch AI's primary dataset. The Densing Law paper, published in Nature Machine Intelligence, finds that capability density doubles every 3.5 months and inference cost halves every 2.6 months.
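As a rough check on how these doubling times compound, the sketch below converts each rate quoted above into a growth factor per year. The rates are taken from the figures cited in this chapter; everything else is plain arithmetic.

```python
# Convert a doubling (or halving) time in months into an annual growth factor.
def annual_factor(doubling_months: float) -> float:
    """Growth factor over 12 months for a given doubling time."""
    return 2 ** (12 / doubling_months)

# Doubling/halving times quoted in the text (months).
trends = {
    "inference cost (halving)": 2.6,
    "capability density": 3.5,
    "frontier training compute": 5.2,
}

for name, months in trends.items():
    print(f"{name}: {annual_factor(months):.1f}x per year")
```

A 5.2-month doubling works out to just under 5× per year, consistent with Epoch AI's 4–5× figure; a 2.6-month halving means inference cost falls roughly 25× in a single year.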
Doubling time in months · log scale · lower is faster

Inference cost (halving) · 2.6 mo
Task horizon (post-2024) · 2.9 mo
Capability density · 3.5 mo
Altman cost-per-query · 3.6 mo
Frontier training compute · 5.2 mo
Task horizon (6-year trend) · 7 mo
Algorithmic efficiency · 8 mo
Moore's Law (transistors) · 24 mo
AI chip $/FLOP · 26 mo

Chart legend · Cost / pricing · Capability / task horizon · Compute / efficiency · Reference (Moore's Law)
GPT-4-class intelligence · Input cost per million tokens
The collapse in the cost of intelligence. Sixteen months · 200×

Mar 2023 · GPT-4 8k · $30 / M tokens
Nov 2023 · GPT-4 Turbo · $10 / M tokens
May 2024 · GPT-4o · $5 / M tokens
Jul 2024 · GPT-4o mini · $0.15 / M tokens
Aug 2024 · GPT-4o (revised) · $2.50 / M tokens
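The headline 200× follows directly from the endpoints of the table above. A quick arithmetic sketch, using only the quoted prices and dates, also recovers the implied halving time:

```python
import math

# Input price per million tokens for GPT-4-class models (from the table above).
price_start = 30.00   # GPT-4 8k, Mar 2023
price_end = 0.15      # GPT-4o mini, Jul 2024
months = 16           # Mar 2023 to Jul 2024

drop = price_start / price_end            # 200x total drop
halving_time = months / math.log2(drop)   # ~2.1 months per halving

print(f"{drop:.0f}x drop; implied halving time {halving_time:.1f} months")
```

The implied halving time of about 2.1 months is in the same neighborhood as the 2.6-month figure from the Densing Law paper, which measures a broader basket of models rather than a single vendor's price cuts.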
§4.1 The numbers that land.
200×
Drop in input-token price for GPT-4-class intelligence between March 2023 and July 2024. Anthropic and Google curves mirror this.
Source · OpenAI pricing history · platform.openai.com/docs/pricing
3.5mo
Doubling time for LLM capability density, per the peer-reviewed Densing Law paper in Nature Machine Intelligence.
Source · Xiao et al., Nature MI, Nov 2025
4–9×
How much faster the new AI doubling trends compound relative to Moore's Law, depending on which trend you measure.
Source · Epoch AI · METR · Densing Law
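The 4–9× range falls out of dividing Moore's Law's 24-month doubling time by the AI doubling times in the chart. A minimal sketch, assuming the rates quoted in this chapter:

```python
MOORES_LAW_MONTHS = 24  # transistor-count doubling time

# AI doubling/halving times from the chart (months).
ai_trends = {
    "inference cost": 2.6,
    "task horizon (post-2024)": 2.9,
    "capability density": 3.5,
    "frontier training compute": 5.2,
}

# Speedup relative to Moore's Law: 24/5.2 ~ 4.6x at the slow end,
# 24/2.6 ~ 9.2x at the fast end.
speedups = {k: MOORES_LAW_MONTHS / v for k, v in ai_trends.items()}
for name, s in speedups.items():
    print(f"{name}: {s:.1f}x faster than Moore's Law")
```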
Primary sources: Xiao, Cai, Zhao et al., “Densing Law of LLMs,” Nature Machine Intelligence (Nov 2025), DOI 10.1038/s42256-025-01137-0. Sevilla et al., “Compute Trends Across Three Eras of Machine Learning,” arXiv:2202.05924. Ho et al., “Algorithmic Progress in Language Models,” arXiv:2403.05812 (Epoch AI). METR Time Horizon 1.1 (Jan 29 2026). Sam Altman, “Three Observations” (Feb 9 2025). OpenAI, Anthropic, and Google published API pricing histories.