
Content provided by Gus Docker and Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Gus Docker and Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.

Dan Hendrycks on Catastrophic AI Risks

2:07:24
 

Manage episode 382010909 series 1334308
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai

Timestamps:
00:00 X.ai - Elon Musk's new AI venture
02:41 How AI risk thinking has evolved
12:58 AI bioengineering
19:16 AI agents
24:55 Preventing autocracy
34:11 AI race - corporations and militaries
48:04 Bulletproofing AI organizations
1:07:51 Open-source models
1:15:35 Dan's textbook on AI safety
1:22:58 Rogue AI
1:28:09 LLMs and value specification
1:33:14 AI goal drift
1:41:10 Power-seeking AI
1:52:07 AI deception
1:57:53 Representation engineering

205 episodes
