A new technique developed by Australian researchers could stop unauthorised artificial intelligence (AI) systems learning from photos, artwork and other image-based content.
Developed by CSIRO, Australia’s national science agency, in partnership with the Cyber Security Cooperative Research Centre (CSCRC) and the University of Chicago, the method subtly alters content to make it unreadable to AI models while appearing unchanged to the human eye.
Defence organisations could shield sensitive satellite imagery or cyber threat data from being absorbed into AI models.
The breakthrough could also help artists, organisations and social media users protect their work and personal data from being used to train AI systems or create deepfakes. For example, a social media user could automatically apply a protective layer to their photos before posting, preventing AI systems from learning facial features for deepfake creation.
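To illustrate the general idea only (the article does not detail CSIRO’s actual algorithm), the Python sketch below applies a small, norm-bounded perturbation to a photo before sharing. The perturbation here is random noise standing in for the carefully optimised pattern a real protection scheme would compute, and the filenames and EPSILON budget are hypothetical.

# Illustrative sketch, not CSIRO's method: add a human-imperceptible,
# L-infinity-bounded perturbation to an image before posting it online.
import numpy as np
from PIL import Image

EPSILON = 8 / 255  # maximum per-pixel change; a common imperceptibility budget

def protect_image(in_path: str, out_path: str, seed: int = 0) -> None:
    """Add a bounded perturbation to an image and save the result."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32) / 255.0
    rng = np.random.default_rng(seed)
    # Placeholder perturbation: random noise. A real protection scheme would
    # optimise this pattern so trained models learn little from the image.
    delta = rng.uniform(-EPSILON, EPSILON, size=img.shape).astype(np.float32)
    protected = np.clip(img + delta, 0.0, 1.0)
    Image.fromarray((protected * 255).round().astype(np.uint8)).save(out_path)

protect_image("photo.jpg", "photo_protected.png")

Because the change to every pixel stays within the EPSILON budget, the protected copy is visually indistinguishable from the original.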
The technique sets a limit on what an AI system can learn from protected content. It provides a mathematical guarantee that this protection holds, even against adaptive attacks or retraining attempts.
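The article stops short of the formal statement, but a guarantee of this shape can be sketched, in illustrative notation of our own rather than the authors’, as a certified ceiling on any model trained on the protected data:

\[
\forall\, f \in \mathcal{A}(D') : \quad \mathrm{Perf}(f) \le \tau,
\]

where \(D'\) is the protected dataset, \(\mathcal{A}\) ranges over training procedures (including adaptive attacks and retraining), and \(\tau\) is the certified limit on what can be learned. All symbols here are assumptions for exposition.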
CSIRO scientist Dr Derui Wang said the technique offers a new level of certainty for anyone uploading content online.
“Existing methods rely on trial and error or assumptions about how AI models behave,” Wang said, adding that the new approach instead comes with a mathematical guarantee.