Much has been said about Apple’s recently announced “Child Safety” initiatives. The focus has been, rightly so, on the implications for user privacy and on whether content processing happens on user devices or on cloud servers. Child safety proponents are lauding Apple for doing more to prevent the spread of harmful material, while privacy advocates are concerned about user data being scanned and reported without consent. Missing from the conversation, however, is a cybersecurity perspective on the potential for authoritarian regimes to abuse these new systems.

Apple’s controversial initiatives are three different but related services involving child sexual abuse material (CSAM): detecting sexually explicit material in Messages, identifying known illegal material in photos uploaded to iCloud, and directing users to CSAM-related resources in Siri and search. There has been widespread confusion and conflation about how the technology works, what is scanned, and how Apple will report users flagged for possessing CSAM. The subsequent conversations have highlighted an important intersection of security and privacy for individuals, companies, and governments.

Without a doubt, the internet has created a thriving global marketplace for CSAM. Platform owners, network operators, and government agencies around the world have partnered to detect, remove, and punish criminals who create and trade CSAM, and those partnerships have been a success. The same can be said for using similar technology and partnerships to combat terrorism. But technology companies and governments will not restrict the “detect and report” playbook to CSAM and terrorism alone. The greater risk to billions of users worldwide is that the tools developed to counter CSAM and terrorism will inevitably be used to combat “others” as defined by individual countries. Authoritarian regimes are poised to exert legal control over device makers and service providers, using detection systems to suppress dissidents and undermine democratic institutions through targeted censorship. Individual users face not only expanded government surveillance but also the risk that adversaries or private sector spyware companies will exploit vulnerabilities to surreptitiously plant illicit material on victims’ devices.

Apple has gone to great lengths technically to prioritize user privacy by offloading the actual analysis of the material to the user’s device rather than performing it on Apple’s cloud servers. The benefit is that this method significantly reduces the opportunity for Apple, or anybody with access to Apple’s systems, to ever see the material being scanned. Normally, digital service providers like Google or Facebook scan files, email, and media uploaded to their platforms. The distinction is that material on those platforms is scanned on the service provider’s servers, not on user devices. Regardless of where the scanning occurs, scrutiny should focus on the outcome of the scanning and on what action is taken with the reported results.
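To make the distinction concrete, the sketch below shows where on-device matching happens in principle. It is a minimal illustration, not Apple’s implementation: the function names are assumptions, and an ordinary cryptographic hash stands in for Apple’s perceptual NeuralHash and the cryptographic protocol that hides match results from the device itself.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of on-device matching, heavily simplified.
// A real perceptual hash tolerates resizing and re-encoding; a
// SHA-256 digest stands in here only to show where the comparison
// happens: on the device, before anything is uploaded.

// Assumed: a database of known-bad hashes shipped to the device,
// opaque to the user. Stubbed as empty for this illustration.
func loadBundledHashDatabase() -> Set<Data> {
    return []
}

let knownIllicitHashes = loadBundledHashDatabase()

// Scan a photo locally; only a match verdict (never the photo
// itself) would accompany the upload in an on-device design.
func matchesKnownMaterial(_ photoData: Data) -> Bool {
    let digest = Data(SHA256.hash(data: photoData))
    return knownIllicitHashes.contains(digest)
}

// Example: the scan runs before the bytes ever leave the device.
let photo = Data([0x00, 0x01, 0x02])
print(matchesKnownMaterial(photo) ? "flagged" : "clean")
```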

The mere creation of CSAM detection technology opens the door for authoritarian regimes to exploit this well-intentioned capability. Demands by those regimes for the ability to detect other types of material are inevitable, regardless of the moral commitments of the company that creates the scanning technology. Digital technology companies have the capability to engineer robust security features, such as end-to-end encryption, that prevent even the company from accessing user data. Law enforcement and other government actors try to chip away at this security using a three-step recipe of exemptions: children, terrorists, and “others.” Even well-intentioned efforts can have unintended consequences. Authoritarian regimes seeking to silence critics and limit the viability of opposition candidates use a similar playbook. For example, the German NetzDG content moderation framework was emulated by several less-than-democratic countries, including Kyrgyzstan and Singapore, to remove content in broad categories like “fake news,” “defamation of religions,” and “anti-government propaganda.”

In fact, democratic countries may be further enabling this type of oppressive behavior through well-intentioned proposed legislation. One example is the recently introduced bipartisan bill targeting the Apple and Google app stores in the United States. If passed, it would prevent the companies from excluding third-party apps from accessing application programming interfaces (APIs) and could grant vendors or governments greater ability to scan data on individual devices. Governments could then require that flagged material be reported to other account holders or to government entities. Ultimately, every company faces the decision of operating within the legal framework of a jurisdiction or not operating in that jurisdiction at all.

In short, the cybersecurity of individuals is at greater risk now that this technology exists. Distinct avenues for abuse become viable with the knowledge that such a detection and reporting system will be available on a billion iOS devices. For example, anyone with access to software that exploits vulnerabilities to gain root-level access could leverage the detection and reporting system to damage a targeted user or group of users. Surveillance software, like that created by NSO Group, could give a perpetrator (criminal, activist, or government agent) the access necessary to transfer known illicit material to a targeted device without the victim’s knowledge. The victim would then be flagged and investigated by authorities using the detection of illicit material as legal pretext for suppression or other law enforcement activity. The reputational damage to the victim could be irreparable.

It would not be difficult for a government to abuse this system on a large scale. Some countries have already shown a willingness to force Apple to make regional or country-specific exemptions to global App Store policies. China has cybersecurity and data security laws in place that permit strict government control over how technology companies handle data, including by requiring them to store Chinese citizens’ data locally and to provide the government with access to encrypted data. Censoring text and images on controversial or perceived derogatory topics, such as comparisons of Xi Jinping to Winnie the Pooh, is commonplace on Chinese social media platforms and search engines. This pressures companies like Apple to exceed their legal obligations even in innocuous cases, such as refusing personalized engravings on Apple accessories that contain words the Chinese government does not approve of, like “resist,” “politics,” or “8964.”

Russia has already exerted an unusual level of control over Apple’s App Store by requiring iPhones sold in Russia to prompt users to install government-approved, Russian-made apps at setup. Thailand’s military has used the Computer Crime Act, a law that gives the government the power to monitor networks, devices, and data without court warrants during government-mandated “emergency actions,” to crack down on dissident behavior. Apple’s shift of media processing onto customer devices would potentially give the Thai government a mechanism to force those devices to self-report government-deemed “illicit material” against the wishes of their owners. In Germany, the display of Nazi imagery or transfer of Nazi items, including on social media, is punishable by jail time. It would be trivial for the German government to request that Apple configure the CSAM detection system to flag hate symbols such as swastikas and to prevent flagged images or videos from leaving the device through common operating system features such as the standard share sheet or browser uploads (a hypothetical gating of exports is sketched below). Every country could develop specific laws related to illicit material that are impossible to enforce unless technology companies like Apple build on-device detection systems.
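The sketch below illustrates how little engineering such a mandate would require once on-device detection exists. Everything in it is hypothetical: the detector stub, the function names, and the policy it enforces are assumptions for illustration, not anything Apple has shipped; only the UIActivityViewController share-sheet API is real.

```swift
import UIKit

// Hypothetical sketch: an on-device detector gates the standard
// share flow. The detector below is a stub; a mandated version
// would compare a perceptual hash against a state-supplied database.
func isFlaggedByMandatedDetector(_ image: UIImage) -> Bool {
    return false  // stubbed: never flagged in this illustration
}

// The only change needed to turn detection into export censorship:
// refuse to present the share sheet for flagged content.
func share(image: UIImage, from viewController: UIViewController) {
    guard !isFlaggedByMandatedDetector(image) else {
        return  // silently drop, or report onward, as mandated
    }
    let shareSheet = UIActivityViewController(
        activityItems: [image],
        applicationActivities: nil
    )
    viewController.present(shareSheet, animated: true)
}
```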

Apple continues to face intense criticism of its CSAM detection system because the unintended consequences of creating such a system may be grave for minority groups and for individuals speaking out against discrimination and authoritarian regimes. Security researchers are keen to learn more about the inner workings of the NeuralHash algorithm at the foundation of the CSAM detection system, just as adversaries are preparing methods to circumvent or co-opt the detection system for nefarious purposes. Only a handful of companies have the vertical integration of hardware, software, and services needed to develop such a detection system. Companies like Apple should more fully consider how the creation of a detection system could put users at greater security risk because those companies, above all else, will “abide by the laws in all of the countries” in which they operate.

The views expressed in GMF publications and commentary are the views of the author alone.