
Content provided by Reimagining Cyber. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Reimagining Cyber or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://id.player.fm/legal.

AI and ChatGPT - Security, Privacy and Ethical Ramifications - Ep 62

27:13
 

This episode features “the expert in ChatGPT”, Stephan Jou. He is CTO of Security Analytics at OpenText Cybersecurity.
“The techniques that we are developing are becoming so sophisticated and scalable that it's really become the only viable method to detect increasingly sophisticated and subtle attacks when the data volumes and velocity are so huge. So think about nation state attacks where you have very advanced adversaries that are using uncommon tools that won't be on any sort of blacklist.”
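As a loose illustration of the blacklist-versus-behavioral contrast Jou describes above, here is a minimal, hypothetical Python sketch. The telemetry fields, process names, and threshold are invented for illustration and are not taken from the episode or any OpenText product; the point is only that rarity-based scoring can surface uncommon tools that no fixed blacklist would list.

from collections import Counter

# Hypothetical telemetry: (host, process_name) events observed across a fleet.
events = [
    ("host-01", "powershell.exe"),
    ("host-01", "svchost.exe"),
    ("host-02", "powershell.exe"),
    ("host-02", "svchost.exe"),
    ("host-03", "rare_custom_tool.exe"),  # uncommon tool, absent from any blacklist
]

# Baseline: how often each process appears fleet-wide.
process_counts = Counter(process for _, process in events)
total = sum(process_counts.values())

def rarity_score(process: str) -> float:
    """Return a 0..1 score; higher means the process is rarer across the fleet."""
    return 1.0 - process_counts.get(process, 0) / total

# Flag events whose process is unusually rare, instead of matching a fixed blacklist.
THRESHOLD = 0.8  # assumed cut-off, for illustration only
alerts = [(host, proc) for host, proc in events if rarity_score(proc) >= THRESHOLD]
print(alerts)  # [('host-03', 'rare_custom_tool.exe')]

Real detection pipelines obviously use far richer features and models than a single frequency count, but the same idea scales: score behavior against a learned baseline rather than a list of known-bad names.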
“In the past five years or so, I've become increasingly interested in the ethical and responsible application of AI. Pure AI is kind of like pure math. It's neutral. It doesn't have an angle to it, but applied AI is a different story. So all of a sudden you have to think about the implications of your AI product, the data that you're using, and whether your AI product can be weaponized or misled.”

“You call me the expert in ChatGPT. I sort of both love it and hate it. I love it because people like me are starting to get so much attention, and I hate it because it's sort of highlighted some areas of potential risk associated with AI that people are only now starting to realize.”
“I'm very much looking forward to using technologies that can understand code and code patterns and how code gets assembled together and built into a product in a human-like way to be able to sort of detect software vulnerabilities. That's a fascinating area of development and research that's going on right now in our labs.”
“[on AI poisoning] The good news is, this is very difficult to do in practice. A lot of the papers that we see on AI poisoning, they're much more theoretical than they are practical.”
Follow or subscribe to the show on your preferred podcast platform.
Share the show with others in the cybersecurity world.
Get in touch via reimaginingcyber@gmail.com


103 episodes
