In the past few years, AI labs have adopted a "more is more" approach to scaling LLMs. By adding more parameters, data, and compute, they've been able to predictably improve model performance. But recently, there's been plenty of debate within the AI community over whether we've finally reached the limits of scaling laws.
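As a rough illustration of what "predictably improve" means (standard background, not a formula from the episode): Chinchilla-style scaling laws model pretraining loss as a function of model size and data, with constants fit empirically to training runs.

% Chinchilla-style parametric scaling law (Hoffmann et al., 2022):
% loss L as a function of parameter count N and training tokens D.
% E, A, B, \alpha, \beta are constants fit to empirical training runs;
% loss falls predictably as N and D grow, which is why labs could
% forecast performance before committing to a large training run.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}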
In this episode of YC Decoded, Y Combinator President and CEO Garry Tan looks at both sides of the scaling laws debate and how a brand-new paradigm could shape the future of AI.
Apply to Y Combinator: https://yc.link/YCDecoded-apply
Work at a startup: https://yc.link/YCDecoded-jobs
Chapters (Powered by https://bit.ly/chapterme-yc) –
00:00 – Intro
01:17 – Scaling laws, decoded
04:10 – Data and compute
05:33 – Chinchilla
06:00 – Larger models and scaling
07:12 – Training
08:40 – Compute
09:42 – Robotics