Jared Kaplan spoke on June 16th, 2025, at AI Startup School in San Francisco.
Jared Kaplan started out as a theoretical physicist chasing questions about the universe. Then he helped uncover one of AI’s most surprising truths: that intelligence scales in a predictable, almost physical way.
That insight became foundational to the modern era of large language models—and led him to co-found Anthropic.
In this talk, he walks through how that discovery reshaped the path to human-level AI, what it means for future models like Claude, and why even the dumbest questions can lead to the biggest breakthroughs. He reflects on memory, oversight, and what's left to solve as models grow smarter and longer-horizon tasks come within reach.
Chapters:
00:17 – From Physics to AI
01:41 – Initial Skepticism and Shift to AI
02:12 – AI Training Phases
02:32 – Pre-Training
03:16 – Reinforcement Learning
04:19 – Scaling Laws in Training
08:19 – Unlocking AI Capabilities
11:27 – Organizational Knowledge and Memory
12:19 – Oversight and Nuanced Tasks
13:38 – Preparing for the Future
15:48 – Claude 4 and Beyond
21:18 – Human-AI Collaboration
29:50 – Scaling Laws and Compute Efficiency
35:26 – Audience Q&A