
LLM DIFF Transformer with SoftMax Subtraction

12:48
 
Content provided by Brian Carter. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Brian Carter or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.

This paper presents DIFF Transformer, a new architecture for large language models. The authors argue that conventional Transformers over-allocate attention to irrelevant parts of the input, drowning out the signal needed for accurate output. DIFF Transformer addresses this with a differential attention mechanism that subtracts two softmax attention maps, canceling out shared attention noise and amplifying attention to relevant content. Extensive experiments show that DIFF Transformer outperforms conventional Transformers across tasks including language modeling, key information retrieval, hallucination mitigation, and in-context learning, and that it needs fewer parameters and less training data to match the performance of a conventional Transformer.

Read more: https://arxiv.org/pdf/2410.05258
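
The core idea can be sketched in a few lines. Below is a minimal, illustrative NumPy sketch of single-head differential attention as described above: two independent query/key projections produce two softmax attention maps, and their difference, scaled by a weight lambda, is applied to the values. The function name, the projection matrices, and the fixed lambda value are assumptions for illustration; in the paper, lambda is a learnable, reparameterized scalar and the mechanism is combined with multi-head normalization.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(X, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.8):
    """Single-head differential attention (illustrative sketch):
    the difference of two softmax attention maps weights the values,
    so attention mass that both maps put on irrelevant tokens cancels."""
    d = Wq1.shape[1]                      # per-map head dimension
    Q1, K1 = X @ Wq1, X @ Wk1             # first query/key projection
    Q2, K2 = X @ Wq2, X @ Wk2             # second query/key projection
    V = X @ Wv
    A1 = softmax(Q1 @ K1.T / np.sqrt(d))  # first attention map
    A2 = softmax(Q2 @ K2.T / np.sqrt(d))  # second attention map
    return (A1 - lam * A2) @ V            # differential attention output

# Tiny usage example with random weights (shapes are illustrative only)
rng = np.random.default_rng(0)
n, d_model, d_head = 6, 16, 8
X = rng.standard_normal((n, d_model))
Wq1, Wk1, Wq2, Wk2 = (rng.standard_normal((d_model, d_head)) for _ in range(4))
Wv = rng.standard_normal((d_model, d_model))
out = differential_attention(X, Wq1, Wk1, Wq2, Wk2, Wv)
print(out.shape)  # (6, 16)
```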
