Artwork

Konten disediakan oleh LessWrong. Semua konten podcast termasuk episode, grafik, dan deskripsi podcast diunggah dan disediakan langsung oleh LessWrong atau mitra platform podcast mereka. Jika Anda yakin seseorang menggunakan karya berhak cipta Anda tanpa izin, Anda dapat mengikuti proses yang diuraikan di sini https://id.player.fm/legal.
Player FM - Aplikasi Podcast
Offline dengan aplikasi Player FM !

“Alignment Faking in Large Language Models” by ryan_greenblatt, evhub, Carson Denison, Benjamin Wright, Fabien Roger, Monte M, Sam Marks, Johannes Treutlein, Sam Bowman, Buck

19:35
 
Bagikan
 

Manage episode 456195913 series 3364758
Konten disediakan oleh LessWrong. Semua konten podcast termasuk episode, grafik, dan deskripsi podcast diunggah dan disediakan langsung oleh LessWrong atau mitra platform podcast mereka. Jika Anda yakin seseorang menggunakan karya berhak cipta Anda tanpa izin, Anda dapat mengikuti proses yang diuraikan di sini https://id.player.fm/legal.
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.
Abstract
We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]
---
Outline:
(00:26) Abstract
(02:22) Twitter thread
(05:46) Blog post
(07:46) Experimental setup
(12:06) Further analyses
(15:50) Caveats
(17:23) Conclusion
(18:03) Acknowledgements
(18:14) Career opportunities at Anthropic
(18:47) Career opportunities at Redwood Research
The original text contained 1 footnote which was omitted from this narration.
The original text contained 8 images which were described by AI.
---
First published:
December 18th, 2024
Source:
https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models
---
Narrated by TYPE III AUDIO.
---
Images from the article:
undefined
undefined
undefined
undefined
undefined
  continue reading

392 episode

Artwork
iconBagikan
 
Manage episode 456195913 series 3364758
Konten disediakan oleh LessWrong. Semua konten podcast termasuk episode, grafik, dan deskripsi podcast diunggah dan disediakan langsung oleh LessWrong atau mitra platform podcast mereka. Jika Anda yakin seseorang menggunakan karya berhak cipta Anda tanpa izin, Anda dapat mengikuti proses yang diuraikan di sini https://id.player.fm/legal.
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.
Abstract
We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]
---
Outline:
(00:26) Abstract
(02:22) Twitter thread
(05:46) Blog post
(07:46) Experimental setup
(12:06) Further analyses
(15:50) Caveats
(17:23) Conclusion
(18:03) Acknowledgements
(18:14) Career opportunities at Anthropic
(18:47) Career opportunities at Redwood Research
The original text contained 1 footnote which was omitted from this narration.
The original text contained 8 images which were described by AI.
---
First published:
December 18th, 2024
Source:
https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models
---
Narrated by TYPE III AUDIO.
---
Images from the article:
undefined
undefined
undefined
undefined
undefined
  continue reading

392 episode

Todos os episódios

×
 
Loading …

Selamat datang di Player FM!

Player FM memindai web untuk mencari podcast berkualitas tinggi untuk Anda nikmati saat ini. Ini adalah aplikasi podcast terbaik dan bekerja untuk Android, iPhone, dan web. Daftar untuk menyinkronkan langganan di seluruh perangkat.

 

Panduan Referensi Cepat