Data poisoning: how artists are trying to sabotage generative AI

Content created with the help of generative artificial intelligence is popping up everywhere, and it’s worrying some artists. They’re concerned that their intellectual property may be at risk because generative AI tools have been built by scraping the internet for data and images, regardless of whether the developers had permission to do so.

In this episode we speak with a computer scientist about how some artists are trying novel ways to sabotage AI to prevent it from scraping their work, through what’s called data poisoning, and why he thinks the issue stems from an ethical problem at the heart of computer science.

Featuring Daniel Angus, professor of digital communication at Queensland University of Technology in Australia. Plus an introduction from Eric Smalley, science and technology editor at The Conversation in the US.

This episode was written and produced by Katie Flood with assistance from Mend Mariwany. Eloise Stevens does our sound design, and our theme music is by Neeta Sarl. Gemma Ware is the executive producer. Full credits available here. A transcript will be available shortly. Subscribe to a free daily newsletter from The Conversation.

Further reading:
Data poisoning: how artists are sabotaging AI to take revenge on image generators
Are tomorrow’s engineers ready to face AI’s ethical challenges?
To understand the risks posed by AI, follow the money
From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam

Hosted on Acast. See acast.com/privacy for more information.

