“o1: A Technical Primer” by Jesse Hoogland

18:45
 
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.
TL;DR: In September 2024, OpenAI released o1, its first "reasoning model". This model exhibits remarkable test-time scaling laws, which complete a missing piece of the Bitter Lesson and open up a new axis for scaling compute. Following Rush and Ritter (2024) and Brown (2024a, 2024b), I explore four hypotheses for how o1 works and discuss some implications for future scaling and recursive self-improvement.
The Bitter Lesson(s)
The Bitter Lesson is that "general methods that leverage computation are ultimately the most effective, and by a large margin." After a decade of scaling pretraining, it's easy to forget this lesson is not just about learning; it's also about search.
OpenAI didn't forget. Their new "reasoning model" o1 has figured out how to scale search at inference time, and it does so without explicit search algorithms. Instead, o1 is trained via RL to get better at implicit search via chain of thought [...]
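As a concrete illustration of the first hypothesis in the outline below ("Filter: Guess + Check"), here is a minimal, self-contained Python sketch of best-of-N sampling against a verifier. Everything in it (the toy task, TARGET, generate, verify, best_of_n) is an invented stand-in for illustration, not OpenAI's actual implementation.

import random

# Toy sketch of "guess + check" test-time scaling. The task, generator,
# and verifier are illustrative stand-ins, not part of o1's real pipeline.

TARGET = 42  # the toy task's correct answer, known to the verifier

def generate(prompt: str) -> int:
    """Stand-in for sampling one chain-of-thought completion."""
    return random.randint(0, 100)

def verify(prompt: str, answer: int) -> float:
    """Stand-in for a checker: higher score means a better answer."""
    return -abs(answer - TARGET)

def best_of_n(prompt: str, n: int) -> int:
    """Sample n candidates and keep the one the verifier scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: verify(prompt, c))

if __name__ == "__main__":
    # Accuracy rises with n: more test-time compute buys more search.
    for n in (1, 4, 16, 64):
        trials = [best_of_n("toy prompt", n) for _ in range(2000)]
        accuracy = sum(t == TARGET for t in trials) / len(trials)
        print(f"n={n:3d}  accuracy={accuracy:.3f}")

With a verifier in the loop, accuracy becomes a function of how many samples you can afford to draw, which is the new axis for scaling compute that the TL;DR refers to.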
---
Outline:
(00:40) The Bitter Lesson(s)
(01:56) What we know about o1
(02:09) What OpenAI has told us
(03:26) What OpenAI has shown us
(04:29) Proto-o1: Chain of Thought
(04:41) In-Context Learning
(05:14) Thinking Step-by-Step
(06:02) Majority Vote
(06:47) o1: Four Hypotheses
(08:57) 1. Filter: Guess + Check
(09:50) 2. Evaluation: Process Rewards
(11:29) 3. Guidance: Search / AlphaZero
(13:00) 4. Combination: Learning to Correct
(14:23) Post-o1: (Recursive) Self-Improvement
(16:43) Outlook
---
First published:
December 9th, 2024
Source:
https://www.lesswrong.com/posts/byNYzsfFmb2TpYFPW/o1-a-technical-primer
---
Narrated by TYPE III AUDIO.
---
Images from the article are available in the original post. Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.