On Thursday, Apple announced that it will begin testing a new system that will automatically match photos on iPhones, and photos uploaded to iCloud, against a database of known child sexual abuse images and alert authorities if necessary.
According to the company, the new service will convert photos on user devices into an unreadable set of hashes, long strings of numbers stored on the device. Those numbers will then be matched against a database of hashes provided by the National Center for Missing and Exploited Children.
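Apple has not published its matching code, but the basic idea can be sketched in a few lines. The snippet below is a simplified illustration, not Apple's implementation: it hashes a photo's raw bytes with SHA-256 and checks the result against a set of known hashes. The example hash set is hypothetical, and a cryptographic hash like SHA-256 only matches byte-identical files, whereas Apple describes a hash that also survives edits to the image.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the database of known hashes provided by NCMEC.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def photo_hash(path: Path) -> str:
    """Hash the photo's raw bytes with SHA-256 (illustrative only)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_database(path: Path) -> bool:
    """True if the photo's hash appears in the known-hash set."""
    return photo_hash(path) in KNOWN_HASHES
```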
Apple (AAPL) is following in the footsteps of other major tech companies such as Google (GOOG) and Facebook (FB), which already scan users' photos for known child abuse imagery.
However, it is also attempting to strike a balance between security and privacy, the latter of which the company has emphasized as a key selling point for its products.
Some privacy advocates were quick to express their displeasure with the effort.
“Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope creep not only in the United States, but throughout the world,” said Greg Nojeim, co-director of the Center for Democracy & Technology’s Security & Surveillance Project. “Apple should abandon these changes and restore its users’ faith in the security and integrity of their data on Apple devices and services.”
“Apple’s method… is designed with user privacy in mind,” the company said in a blog post outlining the changes. According to Apple, the tool does not “scan” user photos; it only flags images whose hashes match entries in the database. (For example, a user’s innocent photo of their child in the bathtub should not be flagged.)
According to Apple, the device will also create a doubly encrypted “safety voucher” (a packet of information sent to Apple’s servers) that is attached to each photo. Once a certain number of safety vouchers have been flagged, Apple’s review team will be notified.
The voucher will then be decrypted, the user’s account will be disabled, and the National Center for Missing and Exploited Children will be notified, which will alert law enforcement. Those who believe their accounts were flagged in error can file an appeal to have them reinstated.
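Apple has not disclosed the exact threshold or the cryptography behind the vouchers, but the flow it describes can be sketched roughly as follows. The threshold value and class name here are hypothetical placeholders, not details from Apple.

```python
REVIEW_THRESHOLD = 30  # hypothetical; Apple has not said how many matches trigger review

class AccountVoucherTally:
    """Tracks flagged safety vouchers for one account (illustrative sketch)."""

    def __init__(self) -> None:
        self.flagged = 0

    def record_flagged_voucher(self) -> bool:
        """Count one flagged voucher; return True once the account has
        accumulated enough matches to be escalated for human review."""
        self.flagged += 1
        return self.flagged >= REVIEW_THRESHOLD
```

The point of the threshold, as Apple describes it, is that a single match does not trigger anything on its own; review begins only after repeated matches accumulate on an account.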
Apple’s goal is to ensure that identical and visually similar images produce the same hash, even if they have been cropped, resized, or converted from color to black and white.
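A hash with that property is a perceptual hash rather than a cryptographic one. Apple’s own algorithm is not public, but a toy “average hash,” sketched below using the Pillow imaging library, shows why edits like resizing or converting to black and white tend to leave the fingerprint unchanged or nearly so. Everything here is an illustration, not Apple’s implementation.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink the image to 8x8 grayscale, then set one
    bit per pixel depending on whether it is brighter than the average.
    Edits like resizing, recompression, or converting to black and white
    usually flip few or no bits, so near-duplicates hash alike."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits; 0 means the two hashes are identical."""
    return bin(a ^ b).count("1")
```

Real systems use far more robust algorithms than this, but the principle of comparing compact, edit-tolerant fingerprints rather than the photos themselves is the same.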
“The reality is that privacy and child protection can co-exist,” John Clark, president and CEO of the National Center for Missing & Exploited Children, said in a statement. “We applaud Apple and look forward to working together to make this world a safer place for children.”
The announcement is part of a larger push by the company to promote child safety. Apple also announced on Thursday a new communication safety tool that will warn users under the age of 18 when they are about to send or receive a message containing an explicit image. The tool, which must be enabled through Family Sharing, uses on-device machine learning to analyze image attachments and determine whether they are sexually explicit. Parents of children under the age of 13 can additionally turn on a notification feature that alerts them if their child is about to send or receive a nude image. Apple said it will not have access to the messages.
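Apple has not released the classifier or its interface, but the decision logic it describes for Messages can be sketched as below. The function name, the score threshold, and the returned actions are hypothetical illustrations of an on-device check, not Apple’s code.

```python
EXPLICIT_THRESHOLD = 0.9  # hypothetical confidence cutoff

def message_image_policy(explicit_score: float, user_age: int,
                         parental_alerts_enabled: bool) -> dict:
    """Apply the policy Apple describes to a score produced by an
    on-device classifier: warn users under 18 about explicit images,
    and optionally notify parents when the user is under 13.
    Nothing here is sent off the device."""
    is_explicit = explicit_score >= EXPLICIT_THRESHOLD
    warn_user = is_explicit and user_age < 18
    notify_parent = warn_user and user_age < 13 and parental_alerts_enabled
    return {"blur_and_warn": warn_user, "notify_parent": notify_parent}

# Example: a 12-year-old receiving an image the model scored as 0.97 explicit.
print(message_image_policy(0.97, user_age=12, parental_alerts_enabled=True))
```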