by Chris Ciaccia at The Daily Mail
Data privacy campaigners are raging today over Apple’s plans to automatically scan iPhones and cloud storage for child abuse images and report ‘flagged’ owners to the police after a company employee has looked at their photos.
The new safety tools will also be used to scan photos sent via text message to protect children from ‘sexting’, automatically blurring any images that Apple’s algorithms detect as child sexual abuse material [CSAM].
But campaigners have accused the tech giant of opening a new back door to accessing personal data and ‘appeasing’ governments who could harness it to snoop on citizens.
While the measures are initially only being rolled out in the US, Apple plans for the technology to soon be available in the UK and other countries worldwide.
But the controversial plans have already been blasted as a ‘huge and regressive step for individual privacy’ over fears the system could easily be adapted to spot other material and is open to abuse.
Greg Nojeim of the Center for Democracy and Technology in Washington DC said that ‘Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship.’
The iPhone maker said the new detection tools have been designed to protect user privacy and do not allow the tech giant to see or scan a user’s photo album.
Instead, the system will look for matches, securely on the device, based on a database of ‘hashes’ – a type of digital fingerprint – of known CSAM images provided by child safety organizations.
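To illustrate the general idea of hash matching, here is a minimal Python sketch: it computes a fingerprint of each photo and checks it against a set of known fingerprints. This is only an illustration under simplified assumptions – it uses plain SHA-256 file digests and made-up data, whereas Apple’s actual system relies on a perceptual ‘NeuralHash’ fingerprint and cryptographic matching performed on the device – so it should not be read as the company’s implementation.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of known CSAM images, as would be
# supplied by child safety organizations. In this simplified sketch they are
# plain SHA-256 digests; Apple's real system uses perceptual hashes instead,
# so that near-identical copies of an image still match.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_path: Path) -> str:
    """Compute a digital fingerprint of an image file (simplified)."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def scan_library(photo_dir: Path) -> list[Path]:
    """Return the photos whose fingerprint matches the known database."""
    matches = []
    for photo in photo_dir.glob("*.jpg"):
        if fingerprint(photo) in KNOWN_HASHES:
            matches.append(photo)
    return matches
```

The key point the sketch captures is that matching is done against fingerprints of already-known images rather than by inspecting the content of every photo, which is the basis of Apple’s claim that the system does not ‘see’ a user’s photo album.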
As well as looking for photos on the phone, cloud storage and messages, Apple’s personal assistant Siri will be taught to ‘intervene’ when users try to search topics related to child sexual abuse.
The technology will allow Apple to:…