
Apple Child Safety Measures: Tech giant Apple recently announced that it would be taking some measures to ensure child safety, using technology to curb the spread of Child Sexual Abuse Material, or CSAM. However, this has garnered criticism from many across the world, with an open letter carrying over 4,000 signatures doing the rounds online, appealing to the iPhone maker to reconsider its stance and stop the rollout of the technology it would be using for these measures. What are the measures that Apple would put in place, and why are people against them? Financial Express Online explains.
Apple’s measures against CSAM
Apple, in a recent blog post, spoke about protecting children from the downsides of the internet. It wrote, “We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM).”
For this, it said that it would be bringing new features to ensure child safety in three different areas. The first among these would be “new communication tools” with the help of which parents would be able to “play a more informed role in helping their children navigate communication online. The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple”.
Explaining this further, Apple said that the Messages app would receive new tools that would warn children as well as parents about sexually explicit images, whether they are being received or sent. “When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo,” Cupertino said. It added, “As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.” However, the iPhone maker clarified that for this, Messages would use machine learning on the device, which would mean that Cupertino would not have access to any of the messages.
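To put that flow in rough programmatic terms, here is a minimal sketch of how such an on-device check might work. Apple has not published implementation details, so the classifier, the parental-notification setting and every name below are assumptions for illustration only.

```python
# Illustrative sketch of the Messages child-safety flow described above.
# All names and the toy classifier are assumptions; nothing here is Apple's code.

from dataclasses import dataclass, field

@dataclass
class ChildAccount:
    parental_notifications: bool = True   # assumed to be an opt-in parental setting
    pending_parent_alerts: list = field(default_factory=list)

def classify_sensitive(image_bytes: bytes) -> bool:
    """Stand-in for Apple's on-device ML model; a toy heuristic, not a real classifier."""
    return b"EXPLICIT" in image_bytes

def receive_photo(account: ChildAccount, image_bytes: bytes) -> dict:
    """Decide what the Messages UI should do with an incoming photo."""
    if not classify_sensitive(image_bytes):
        return {"show": True, "blurred": False, "warning": None}

    # Sensitive: blur the photo and warn the child before it can be viewed.
    warning = ("This photo may be sensitive. You don't have to view it. "
               "If you do, your parents may be notified.")
    return {"show": False, "blurred": True, "warning": warning}

def child_chooses_to_view(account: ChildAccount, image_id: str) -> None:
    """If the child views the photo anyway, queue a local alert for the parent."""
    if account.parental_notifications:
        account.pending_parent_alerts.append(image_id)

# Everything above runs on the device; no image or message content leaves it.
```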
The second measure that the company plans to incorporate is the use of new cryptography applications in iOS and iPadOS so that the spread of CSAM online can be curbed. “CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos,” it said. CSAM refers to sexually explicit content that involves a child. “To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC),” Cupertino added. However, Apple clarified that the technology would not compromise user privacy.
“Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices,” it said.
Basically, what this would do is match images on the device against a database of known CSAM image hashes from child safety organisations. The matching would be performed on-device before the images are stored in iCloud Photos. The result of the match would not be revealed to the user; instead, the device would create a “cryptographic safety voucher” encoding the match result along with encrypted data about the image, and this voucher would be uploaded to iCloud together with the image backup. Then a technology called “threshold secret sharing” would come into play to ensure that Apple would not be able to interpret the contents of these vouchers unless the iCloud Photos account crosses a threshold of known CSAM content.
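A rough sketch of what that on-device matching and voucher step could look like is below, under a heavily simplified set of assumptions: SHA-256 stands in for Apple's perceptual NeuralHash, the hash database is an empty placeholder, and the “encryption” is a toy XOR pad rather than Apple's actual cryptography.

```python
# Sketch of on-device matching plus "safety voucher" creation. The hash function,
# database and encryption here are placeholders, not Apple's real design.

import hashlib
import json

# Blinded set of known CSAM hashes shipped with the OS; empty placeholder here.
# In the real design the database comes from NCMEC and is unreadable on the device.
KNOWN_CSAM_HASHES: set = set()

def image_hash(image_bytes: bytes) -> str:
    # sha256 is a stand-in: the real system uses a perceptual hash (NeuralHash)
    # so visually identical images still match after resizing or re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def make_safety_voucher(image_bytes: bytes, pad: bytes) -> bytes:
    """Build an opaque voucher encoding the match result plus data about the image."""
    payload = json.dumps({
        "match": image_hash(image_bytes) in KNOWN_CSAM_HASHES,
        "hash": image_hash(image_bytes),
    }).encode()
    # Toy XOR "encryption": in the real design the voucher stays unreadable to Apple
    # until the account crosses the CSAM-match threshold (see the next sketch).
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(payload))

def upload_to_icloud(image_bytes: bytes, voucher: bytes) -> None:
    ...  # the voucher travels alongside the photo backup; it is never shown to the user
```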
“The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account. Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated,” Cupertino stated. This way, it said, Apple would only know about images with confirmed CSAM content, and would not have access to any other images, even for users who have known CSAM content in their libraries.
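The “threshold secret sharing” idea itself is standard cryptography. A minimal sketch using Shamir’s secret sharing illustrates the principle, on the assumption that Apple’s scheme works broadly like this: each matching image’s voucher carries one share of a per-account decryption key, and the server can only reconstruct that key once it holds at least a threshold number of shares. The threshold and share counts below are arbitrary.

```python
# Minimal Shamir secret-sharing sketch to illustrate "threshold secret sharing".
# The per-account key, threshold value and share count are illustrative assumptions.

import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy key

def split_secret(secret: int, threshold: int, num_shares: int):
    """Split `secret` into `num_shares` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from >= threshold shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)            # per-account decryption key (assumption)
shares = split_secret(key, threshold=30, num_shares=1000)
assert reconstruct(shares[:30]) == key   # 30 matches: the vouchers can be decrypted
assert reconstruct(shares[:29]) != key   # 29 matches: the vouchers stay unreadable
```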
The last measure that Apple plans to incorporate is an expansion of Siri and Search to provide parents and children with information and help if they encounter unsafe situations. Siri and Search would also intervene when a user looks up CSAM-related topics. “For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report,” Cupertino said, adding that “interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue”.
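Conceptually, the Search intervention is just a routing decision: queries touching flagged topics get redirected to help resources instead of ordinary results. The sketch below is a hypothetical illustration; the topic list, messages and resource links are placeholders, not Apple’s.

```python
# Hypothetical sketch of a search-time intervention; nothing here is Apple's actual
# topic list, wording or resource set.

INTERVENTION_TOPICS = {"csam", "child exploitation"}   # placeholder terms
REPORTING_RESOURCES = ["https://report.cybertip.org"]  # example: NCMEC's CyberTipline

def handle_query(query: str) -> dict:
    if any(topic in query.lower() for topic in INTERVENTION_TOPICS):
        return {
            "type": "intervention",
            "message": "Interest in this topic can be harmful. Help is available.",
            "resources": REPORTING_RESOURCES,
        }
    return {"type": "results", "message": None, "resources": []}
```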
The updates are likely to come later this year to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.
The move has been lauded by actor Ashton Kutcher, who has been actively involved in advocacy work to end child sex trafficking for a decade now. Kutcher called Apple’s move a major step forward.
I believe in privacy – including for kids whose sexual abuse is documented and spread online without consent. These efforts announced by @Apple are a major step forward in the fight to eliminate CSAM from the internet. https://t.co/TQIxHlu4EX
— ashton kutcher (@aplusk) August 5, 2021
Several people have, however, expressed concern over Apple’s move, writing an open letter asking Cupertino to reconsider its technology rollout. They argue that the rollout would undo decades of work that technologists, academics and policy advocates have done to preserve privacy. One of the most vocal critics is Will Cathcart, the head of Facebook-owned WhatsApp, who said that Apple’s approach is concerning, and pointed to how WhatsApp tackles the issue on the basis of “user reports”, having reported over 4 lakh cases to the NCMEC.
WhatsApp hitting back at Apple is almost expected, given the ongoing tiff between Facebook and Apple over the iOS update that hit Facebook’s targeted-ad business hard. Moreover, WhatsApp’s reliance on user reports is dubious at best, if the reporting systems of sister platforms Facebook and Instagram, where decisions regarding objectionable content are often arbitrary, are anything to go by.
Epic CEO Tim Sweeney has also pushed back against the policy, though it is worth noting that he is involved in a lawsuit against Apple. He spoke out against Apple “vacuuming up everyone’s data into iCloud by default”. While his tweet did not address the issue at hand specifically, he promised to share detailed thoughts on it later.
It’s atrocious how Apple vacuums up everybody’s data into iCloud by default, hides the 15+ separate options to turn parts of it off in Settings underneath your name, and forces you to have an unwanted email account. Apple would NEVER allow a third party to ship an app like this.
— Tim Sweeney (@TimSweeneyEpic) August 6, 2021
However, Cathcart and Sweeney are not the only ones. Several others, like Johns Hopkins University associate professor Matthew Green, whistleblower Edward Snowden, and game developer and congressional candidate Brianna Wu, all spoke about how this could be exploited by governments and compromise user privacy.
These tools will allow Apple to scan your iPhone photos for photos that match a specific perceptual hash, and report them to Apple servers if too many appear.
— Matthew Green (@matthew_d_green) August 5, 2021
Apple plans to modify iPhones to constantly scan for contraband:
“It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of our phones and laptops,” said Ross Anderson, professor of security engineering. https://t.co/rS92HR3pUZ
— Edward Snowden (@Snowden) August 5, 2021
This is the worst idea in Apple history, and I don’t say that lightly.
It destroys their credibility on privacy. It will be abused by governments. It will get gay children killed and disowned. This is the worst idea ever. https://t.co/M2EIn2jUK2
— Brianna Wu (@BriannaWu) August 5, 2021
Meanwhile, Harvard’s Cyberlaw Clinic instructor Kendra Albert pointed out that parents finding out about children viewing or sending sexually explicit content could be unsafe for queer kids, who could end up getting kicked out of their homes, beaten or worse. This issue was also raised by Wu.
The idea that parents are safe people for teens to have conversations about sex or sexting with is admirable, but in many cases, not true. (And as far as I can tell, this stuff doesn’t just apply to kids under the age for 13.)
— Kendra Albert (@KendraSerra) August 5, 2021
The problem with the opposition to Apple’s policy is that, though the concerns are legitimate, most critics are looking only at the governmental angle and user privacy, and do not offer alternatives (except Cathcart) to the widely prevalent issue of child sexual abuse. Technology, while having its upsides, also has its pitfalls. It becomes interspersed with children’s lives at an early age, and young children are often not aware of how to keep themselves safe.
An example of this exploitation: since technology gives fans, including young children, a platform to interact with their favourite celebrities, they are quite excited to get responses from their idols. People with malicious intentions exploit this, creating accounts that impersonate celebrities and claim to be the “celebrity’s” personal account. These accounts are kept private but accept the follow request of any user and, when asked, claim to be the celebrity. Since they respond to all messages, younger children are at risk of being trapped under the impression that they are talking to their idol, and often end up being exploited by these impersonators. It is a widespread issue, especially on Instagram.
Child sexual abuse is a matter of serious concern, and if technology is aiding the rise of such activities, it is also the responsibility of technology companies to provide a solution.
In this matter, cryptographer Matt Blaze has given the most logical and balanced reaction. He said that with this, Cupertino is likely to face tremendous governmental pressure to expand its technology beyond CSAM so that any content governments do not deem fit can be curbed, which would undoubtedly be a violation of freedom of expression. He said, “In other words, not only does the policy have to be exceptionally robust, so does the implementation.” He further enumerated the challenges it could face, and concluded, “It’s really easy to caricature this as ‘Apple is trying to invade your privacy’ but I really don’t see evidence of that here. But the problem they’re trying to solve is HARD, with multiple dimensions, in ways that are easy to underestimate.”
In other words, not only does the policy have to be exceptionally robust, so does the implementation.
— matt blaze (@mattblaze) August 6, 2021