Seth Lazar: Normative Philosophy of Computing

Episode 124

You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.

I spoke with Professor Seth Lazar about:

* Why managing near-term and long-term risks isn’t always zero-sum

* How to think through axioms and systems in political philosophy

* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

Reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:54) Ad read — MLOps conference

* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation

* (03:53) Attention allocation as an independent good (or bad)

* (08:22) Axioms in political philosophy

* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust

* (15:05) AI safety / catastrophic risk concerns

* (22:10) Superintelligence arguments, reasoning about technology

* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?

* (35:55) GPT-2, model weights, related debates

* (39:11) Power and economics — coordination problems, company incentives

* (50:42) Morality tales, relationship between safety and capabilities

* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy

* (1:02:28) What is a feasibility horizon?

* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter

* (1:14:25) Sociotechnical lenses, narrowly technical solutions

* (1:19:47) Experiments for responsibly integrating AI systems into society

* (1:26:53) Helpful/honest/harmless and antagonistic AI systems

* (1:33:35) Managing incentives conducive to developing technology in the public interest

* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia

* (1:46:54) How we can help legitimize and support interdisciplinary work

* (1:50:07) Outro

Links:

* Seth’s Linktree and Twitter

* Resources

  * Attention, moral skill, and algorithmic recommendation

  * Catastrophic AI Risk slides


Get full access to The Gradient at thegradientpub.substack.com/subscribe