Connor Leahy on AI Safety and Why the World is Fragile

1:05:05
 
Content provided by Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

205 episodes
