François Chollet has spent years asking a different question than most of the AI world. Instead of scaling what already works, he’s trying to understand what intelligence actually is—and how to build it from first principles. In this episode of Lightcone, he traces that path from his early work on deep learning to the creation of the ARC Prize, and the launch of ARC-AGI V3, a new benchmark designed to measure something deeper than performance: the ability to learn, adapt, and reason efficiently in entirely new environments. He explains why today’s systems may be hitting limits, what recent breakthroughs really mean, and why reaching true general intelligence may require a fundamentally different approach.
00:00 – AGI by 2030?
00:31 – Introducing Ndea: A New Path Beyond Deep Learning
01:08 – A New ML Paradigm
01:30 – Replacing Neural Nets With Compact Symbolic Programs
03:04 – Why Ndea Isn’t Competing With Coding Agents
05:20 – Why Everyone Might Be Wrong About Scaling LLMs
07:22 – Why Coding Agents Suddenly Work So Well
08:50 – The Limits of LLMs in Non-Verifiable Domains
10:48 – What AGI Actually Means (And Why Most Definitions Are Wrong)
13:30 – Why Deep Learning Hits a Wall
14:00 – ARC’s Origin Story
18:20 – ARC Benchmarks Explained: From V1 to V3
22:49 – The RL Loop Powering Coding Agents Today
27:03 – ARC-AGI V3: Measuring “Agentic Intelligence”
31:14 – Inside the ARC Game Studio
35:31 – Could AGI Fit in 10,000 Lines of Code?
44:01 – Building Ndea: From Idea to Compounding Research Stack
46:46 – The Future of ARC: Benchmarks That Evolve With AI
47:21 – Why There’s Still Huge Opportunity for New AI Paradigms
53:37 – How to Build a Breakout Open Source Project – Lessons From Keras
56:39 – Advice For How To Think About AI
Apply to Y Combinator: https://www.ycombinator.com/apply
Work at a startup: https://www.ycombinator.com/jobs