Content provided by Aaron Bergman. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Aaron Bergman or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.

Is AI an existential risk? A discussion | Ryan Kidd, James Fodor | EAGxAustralia 2023

56:07
 
Manage episode 424733775 series 3503936

Ryan is Co-Director of the ML Alignment Theory Scholars Program, a Board Member and Co-Founder of the London Initiative for Safe AI, and a Manifund Regrantor. Previously, he completed a PhD in Physics at the University of Queensland and ran UQ's Effective Altruism student group for about three years. Ryan's ethics are largely preference utilitarian and cosmopolitan; he is deeply concerned about near-term x-risk and safeguarding the long-term future. James Fodor is a PhD student in the Decision, Risk and Financial Sciences Program. He completed graduate studies in physics and economics at the University of Melbourne, and a master's degree in neuroscience at the Australian National University. He has also worked as a research assistant in structural biology at Monash University. Outside of research, James has a keen interest in science, philosophy, and critical thinking. He is passionate about Effective Altruism, including causes such as global poverty and animal welfare.


159 episodes
