Content provided by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.

Using LLMs to Evaluate Code

1:02:10
 

Finding and fixing weaknesses and vulnerabilities in source code is an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI, to produce and evaluate programs. One question this raises is: do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code, or correctly determine that there were none. This webcast provides background on our methods and a summary of our results.
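The experiments described above can be pictured as a simple benchmarking loop: show the model code snippets with known ground truth (flawed or clean), collect its verdicts, and score them against the labels. The sketch below is purely illustrative; `query_llm` is a hypothetical stand-in (here a toy keyword heuristic), and the snippets and labels are invented for the example. A real experiment would call an actual model API and use a curated, labeled test suite.

```python
# Hypothetical sketch of benchmarking an LLM-based code evaluator:
# score model verdicts against known-good labels.

# (snippet, has_flaw) pairs -- ground-truth labels assumed for illustration
TEST_CASES = [
    ("char buf[8]; strcpy(buf, user_input);", True),      # unbounded copy
    ("size_t n = strnlen(s, MAX_LEN); /* ... */", False),  # bounded variant
]

def query_llm(snippet: str) -> bool:
    """Stand-in for an LLM call: True means the 'model' flags a flaw.
    A real experiment would send the snippet to a model and parse its answer."""
    return "strcpy" in snippet  # toy heuristic in place of a real model

def score(cases) -> float:
    """Fraction of cases where the model's verdict matches ground truth."""
    correct = sum(query_llm(code) == label for code, label in cases)
    return correct / len(cases)

print(score(TEST_CASES))  # 1.0 on this toy set
```

Rerunning the same labeled suite against each newly released model is one way to track the "evolution of capability" the webcast discusses.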

What Will Attendees Learn?

• How well LLMs can evaluate source code

• How capability has evolved as new LLMs are released

• How to address potential gaps in capability

