Helga Nowotny, "In AI We Trust: Power, Illusion and Control of Predictive Algorithms" (Polity, 2021)

Duration: 48:16
 
Content provided by New Books Network. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by New Books Network or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.

Today I talked to Helga Nowotny about her new book In AI We Trust: Power, Illusion and Control of Predictive Algorithms (Polity, 2021).

One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI - and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.

At the heart of our trust in AI lies a paradox: we leverage AI to increase our control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.

As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to better understand the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.

Galina Limorenko is a doctoral candidate in Neuroscience with a focus on biochemistry and molecular biology of neurodegenerative diseases at EPFL in Switzerland. To discuss and propose the book for an interview you can reach her at galina.limorenko@epfl.ch.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/technology

