
“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan

18:39
 
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

We wanted to share a recap of our recent outputs with the AF community. Below, we fill in some details about what we have been working on, what motivated us to do it, and how we thought about its importance. We hope that this will help people build off things we have done and see how their work fits with ours.
Who are we?
We’re the main team at Google DeepMind working on technical approaches to existential risk from AI systems. Since our last post, we’ve evolved into the AGI Safety & Alignment team, which we think of as AGI Alignment (with subteams like mechanistic interpretability, scalable oversight, etc.), and Frontier Safety (working on the Frontier Safety Framework, including developing and running dangerous capability evaluations). We’ve also been growing since our last post: by 39% last year [...]
---
Outline:
(00:32) Who are we?
(01:32) What have we been up to?
(02:16) Frontier Safety
(02:38) FSF
(04:05) Dangerous Capability Evaluations
(05:12) Mechanistic Interpretability
(08:54) Amplified Oversight
(09:23) Theoretical Work on Debate
(10:32) Empirical Work on Debate
(11:37) Causal Alignment
(12:47) Emerging Topics
(14:57) Highlights from Our Collaborations
(17:07) What are we planning next?
---
First published:
August 20th, 2024
Source:
https://www.lesswrong.com/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of
---
Narrated by TYPE III AUDIO.