By some counts, the “Crypto Wars” have been going on for a half century. Starting in the 1970s with claims that publication of cryptographic research would endanger national security, and continuing with fights over whether the National Security Agency (NSA) should control the development of cryptographic standards for non-national security agencies, the Crypto Wars then focused on export control. Because vendors rarely built separate domestic and export versions of their products, export rules in effect determined whether U.S. products would include strong cryptography at all—and thus whether strong cryptography would be available within the U.S.
When the European Union and the United States loosened cryptographic export controls at the turn of the millennium, it looked as if the technologists—who had fought for this change—had won. By the end of the 2000s, major manufacturers began encrypting data on consumer devices, including phones, and, shortly afterward, began encrypting communications as well. The latter involved end-to-end encryption (E2EE), a method of securing communications so that only the message endpoints (the sender and the receiver of the message) can view the unencrypted message.
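To make the E2EE idea concrete, here is a minimal sketch of endpoint-only decryption using the PyNaCl library (a Python binding to libsodium). It illustrates only the core principle—a relaying server never holds a key that can decrypt the message—not the far more elaborate protocols (key agreement, forward secrecy, authentication) that real messengers such as Signal and WhatsApp layer on top of this idea. The names and message below are, of course, illustrative.

```python
# Minimal illustration of end-to-end encryption with PyNaCl (libsodium).
# Only the endpoints hold private keys; a relaying server sees ciphertext only.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
alice_box = Box(alice_private, bob_private.public_key)
ciphertext = alice_box.encrypt(b"see you at noon")  # what the server relays

# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_private, alice_private.public_key)
assert bob_box.decrypt(ciphertext) == b"see you at noon"
```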
But law enforcement has always taken issue with encryption, including E2EE, being widely available to the public. In 1995, FBI Director Louis Freeh told Congress that “[d]rug cartels, terrorists, and kidnappers will use telephones and other communications media with impunity.” As the years passed, wiretapping experienced “function creep,” the phenomenon in which the uses for a particular surveillance technology expand continually.
By the 2000s, the FBI argued that law enforcement was increasingly “Going Dark,” unable to conduct investigations because of encryption and lack of access to communications and data. In the 1990s, the bad guys were organized crime, drug dealers, and kidnappers, while in the 2000s, they were “predators who exploit the most vulnerable among us[,] … violent criminals who target our communities[,] … a terrorist cell using social media to recruit, plan, and execute an attack.” Now, in what appears to be a cynical ploy by law enforcement officials across the globe, the newest fight against E2EE is being waged on the basis of the need to prevent the internet-enabled spread of child sexual abuse material (CSAM).
Understanding CSAE
Child sexual abuse and exploitation (CSAE), which includes online child sex trafficking and live-streamed sexual abuse, is a serious problem. The internet has enabled the sharing of CSAM, the sex trafficking of children, and the video streaming of real-time sexual abuse of children. Some of these horrific activities appear to have greatly increased in recent years—although some of that increase may be due to better reporting. To have a clear-headed—as opposed to a highly emotional—discussion of the issues, it is critical to get at the facts. That is the purpose of this piece.
I’ll start with a reprise of the CSAE situation and the technologies used to detect and investigate the associated crimes. Next I’ll discuss E2EE and securing today’s online world. To clarify the conflict between investigating online CSAE and ensuring the wide availability of E2EE, I’ll examine the recent spate of bills in the U.K., EU, and U.S. aimed at tackling online CSAE, including the recently enacted U.K. Online Safety Act. In order to tell a complete story, some of this is a reprise of articles I’ve written in the past.
The U.S. National Center for Missing and Exploited Children (NCMEC) is a federally funded clearinghouse for information on missing and sexually abused children. The organization collects and shares evidence of CSAE with authorized parties; it is one of the world’s leading providers of information on online occurrences of CSAE. In existence since 1984, NCMEC has seen online reporting of CSAE cases grow explosively. The organization received 29 million reports of online sexual exploitation in 2021, a 10-fold increase over a decade earlier. Meanwhile the number of video files reported to NCMEC increased over 40 percent between 2020 and 2021.
Understanding the meaning of the NCMEC numbers requires careful examination. Facebook found that over 90 percent of the reports the company filed with NCMEC in October and November 2021 were “the same as or visually similar to previously reported content.” Half of the reports were based on just six videos. That the reports involved far fewer than 29 million distinct instances of child sexual exploitation does not make the CSAE problem go away. Each circulation of a photo or video showing a child being sexually abused is harmful—even if the image has already been reported and shared hundreds of thousands of times—because each additional viewing increases the chance that the abused person will be recognized as having been the subject of CSAE.
To understand how to prevent CSAE and interdict its perpetrators, we need to examine what CSAE really is. In 2022, Laura Draper provided an excellent report that laid this out while also examining how CSAE might be investigated even in the face of E2EE.
Draper observed that CSAE consists of four types of activities exacerbated by internet access: (a) CSAM, which is the sharing of photos or videos of child sexual abuse imagery; (b) perceived first-person (PFP) material, which is nude imagery taken by children of themselves and then shared, often much more widely than the child intended; (c) internet-enabled child sex trafficking; and (d) live online sexual abuse of children. One of Draper’s points was that interventions to prevent the crime and to investigate it vary by the type of activity.
Consider the sharing of CSAM photos and videos. In a study conducted in 2020-2021, Facebook evaluated 150 accounts that it had reported to NCMEC for uploading CSAE content. Researchers found that more than 75 percent of those sharing CSAM “did not do so with … intent to hurt the child.” Instead, they shared the images either out of anger that the images existed or because they found the images “humorous.” Two of Draper’s suggestions—simplified methods for reporting CSAM and alerting users that sharing CSAM images carries serious legal consequences—seem likely to reduce that particular type of spread.
Other types of interventions would work for other kinds of CSAE cases. For example, in PFP sharing of nude images, the child herself—the vast preponderance of those sharing PFP imagery are girls—is not committing a crime. She also may not see a problem with sharing a nude photo of herself. In 2021, Thorn, an international organization devoted to preventing child sexual abuse, reported that 34 percent of U.S. teens aged 13 to 17 saw such sharing as normal, as did 14 percent of children between ages 9 and 12. Draper pointed out that empowering a child to report an overshared photo would give law enforcement investigators a head start on investigating and thwarting this and related crimes.
Or consider the particularly horrific crime in which a child’s sexual abuse is live-streamed according to requests made by a customer. The actual act of abuse often occurs abroad. Even in such cases, aspects of the crime can be investigated in the presence of E2EE. First, the video stream is high bandwidth from the abuser to the customer but very low bandwidth the other way, with only an occasional verbal or written request. Such traffic stands out from normal communications; it looks neither like a usual video call nor like the streaming of a film. And the fact that the trafficker must publicly advertise for customers provides law enforcement another route for investigation. But investigations are also often stymied when the in-country abuser is a family member or friend, making the child reluctant to speak to the police (this is also the case for so-called child sex tourism, in which people travel with the intent of engaging in sexual activity with children).
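To illustrate the kind of metadata signal just described—one that remains visible even when content is end-to-end encrypted—here is a hypothetical sketch that flags long-lived sessions whose traffic is extremely lopsided: a sustained high-bandwidth stream in one direction and only sporadic small messages in the other. The flow-record fields and thresholds are invented for illustration; real traffic analysis is considerably more sophisticated and is only one input to an investigation.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Per-session byte counts visible to a network observer (no content)."""
    session_id: str
    bytes_forward: int   # e.g., a sustained video stream
    bytes_reverse: int   # e.g., occasional short requests

def flag_asymmetric_streams(flows, min_stream_bytes=500_000_000, max_ratio=0.001):
    """Flag sessions where one direction carries a large stream and the
    other carries almost nothing. Thresholds are illustrative placeholders."""
    flagged = []
    for flow in flows:
        heavy = max(flow.bytes_forward, flow.bytes_reverse)
        light = min(flow.bytes_forward, flow.bytes_reverse)
        if heavy >= min_stream_bytes and light / heavy <= max_ratio:
            flagged.append(flow.session_id)
    return flagged
```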
The point is that each of these types of cases is investigated with quite different techniques. I will stop here; for a thorough discussion of the prevention and investigation of CSAE crimes, read Draper’s report (or see my summary in Lawfare).
End-to-end encryption hides communications content from everyone but the sender and recipient, and its use can complicate CSAE investigations. But as we have seen, by digging deeper into the different kinds of crimes that constitute CSAE, one finds that there are successful prevention and investigative techniques that do not involve breaking encryption. Thus, while it is easy to blame encryption for the CSAE problem, E2EE technology is not the root of the child sexual abuse problem. As Jim Baker, former FBI general counsel, wrote in 2019,
The substantial increase in offenses against children over the years and the inability of law enforcement to effectively address the problem represents a complex systemic failure with multiple causes and many responsible parties, not the least of which are the producers and consumers of such material. There is plenty of blame to go around for society’s colossal collective failure to protect children; encryption is only one of the challenges.
The Benefits of E2EE
It’s time to look at the other side of the coin, namely the protections that E2EE affords society.
E2EE provides confidentiality to online communications. Although E2EE is important for privacy, its uses are much broader than that. Wide availability of encryption tools is extremely important for personal, business, and national security. To be useful, however, encryption tools must be not only widely available but also easy to use. The ubiquity of E2EE in applications such as WhatsApp, iMessage (though only for iPhone-to-iPhone messaging), and Signal makes it simple to use. Thus, over the past decade, as these tools have become increasingly available, more and more people have adopted E2EE. This includes Congress: In 2017, the Senate sergeant at arms approved the use of Signal by Senate staff.
Good public policy decision-making requires weighing competing interests. In 2019, the Carnegie Endowment for International Peace published the results of an encryption study. Study members included several people who had held senior government posts in law enforcement and national security, academics, and representatives of civil society and industry (disclosure: I was a participant). In our report, we noted, “Security in the context of the encryption debate consists of multiple aspects including national security, public safety, cybersecurity and privacy, and security from hostile or oppressive state actors. The key is determining how to weigh competing security interests.”
Many national security leaders strongly agree with the move to broader availability of E2EE. In 2015, former NSA Director Mike McConnell, former Department of Homeland Security Secretary Michael Chertoff, and former Deputy Defense Secretary William Lynn wrote in the Washington Post that “the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.” In 2016, former NSA and CIA Director Michael Hayden told an interviewer that “we are probably better served by not punching any holes into a strong encryption system—even well-guarded ones.” That same year, Robert Hannigan, former head of the U.K.’s Government Communications Headquarters (GCHQ), the country’s signals intelligence agency, argued for keeping encryption strong: “I am not in favour of banning encryption. Nor am I [] asking for mandatory ‘back doors’.”
Such arguments are part of the reason behind the loosening of cryptographic export controls in 1999 and 2000. They are why encryption controls did not return in the face of terrorist attacks such as those in Paris and San Bernardino. In 2015, the Obama administration opted not to pursue a legislative “solution” to the encryption problem of locked devices. The decided lack of public statements by the U.S. national security establishment in support of law enforcement’s stance on encryption is also notable.
The argument for ending E2EE to prevent CSAE fails to respect the balance needed when weighing competing public interests. Curbing the use of E2EE to prevent CSAE would be like prohibiting the use of airbags because their deployment can injure short people. The prohibition might help protect those shorter in stature, but only at the far higher societal cost of failing to prevent many more, and more serious, injuries.
Yet despite the national security community’s strong support for wide public availability of E2EE—and the economic arguments for it—some legislators, in the name of reducing the online spread of CSAE, are pressing hard to stop the use of E2EE, and doing so in a way that disguises some of the consequences of the proposed legislation.
Deficient Legislative Efforts
Several current bills in the EU and U.S. appear to permit the use of E2EE. Careful interpretation of their language shows otherwise. I’ll start by looking at the recently enacted U.K. law.
The U.K. Online Safety Act has many parts. The part that affects encryption requires that “all UK provider[s] of a regulated user-to-user service must operate the service using systems and processes which secure (so far as possible) that the provider reports all detected and unreported CSEA [Child Sexual Exploitation and Abuse] content present in the service” (§ 67 (1)). (All non-U.K. providers of regulated services must do the same for any U.K.-linked CSEA content.) Providers must secure their services using an “accredited technology” (§ 122(2)(iii) and (iv)). This technical accreditation will be done by Ofcom, the U.K. communications regulator and rough equivalent of the U.S. Federal Communications Commission.
Here’s where legislative wishes conflict with scientific reality.
In November 2022, Ofcom published a report detailing the complexities of developing a CSAE-recognizing technology that has a reasonable accuracy standard. As I explain in the next few paragraphs, there is currently no known technology that can do what the new law requires.
Ofcom started by discussing the current technology of choice for recognizing known CSAE: perceptual hashing. This technique assigns a file, such as a piece of music or a visual image, a string of bits; two files are tagged as similar if their perceptual hashes differ in relatively few places. The perceptual hash is calculated by first breaking the file into many small pieces and then computing a “hash function” on each piece. For example, the function might be based on how light or dark the image in that small piece is, with “00” being white, “01” light gray, “10” dark gray, and “11” black. The perceptual hash is the ordered concatenation of these values. This method enables recognition even if an image has been changed slightly (e.g., through resizing or graying). Systems in use—Microsoft’s PhotoDNA, YouTube’s CSAI Match, Apple’s NeuralHash, Meta’s PDQ and TMK-PDQ—all use perceptual hashing to recognize known CSAM (both photos and videos).
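To make the idea concrete, here is a toy version of the scheme just described: divide a grayscale image into a small grid of blocks, quantize each block’s average brightness to two bits, and concatenate the results; two hashes are then compared by counting the bit positions in which they differ. This is a purely illustrative sketch, not the algorithm used by PhotoDNA, NeuralHash, or PDQ, and the similarity threshold is an invented placeholder.

```python
def perceptual_hash(pixels, grid=8):
    """Toy perceptual hash. `pixels` is a 2D list of grayscale values (0-255),
    assumed at least grid x grid in size; returns a bit string."""
    height, width = len(pixels), len(pixels[0])
    bits = []
    for row in range(grid):
        for col in range(grid):
            # Average brightness of one block of the image.
            r0, r1 = row * height // grid, (row + 1) * height // grid
            c0, c1 = col * width // grid, (col + 1) * width // grid
            block = [pixels[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            avg = sum(block) / len(block)
            # Quantize to two bits: "00" white ... "11" black.
            bits.append(format(min(3, int((255 - avg) / 64)), "02b"))
    return "".join(bits)

def hamming_distance(hash1, hash2):
    """Number of bit positions in which two hashes differ."""
    return sum(b1 != b2 for b1, b2 in zip(hash1, hash2))

SIMILARITY_THRESHOLD = 10  # illustrative; deployed systems tune this carefully
# Two images are tagged as matches if hamming_distance(h1, h2) <= threshold.
```

A deployed system computes the hash of each uploaded image, compares it against a database of hashes of known CSAM, and flags images whose hashes fall within the similarity threshold.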
But a problem arises with this approach. Image-editing software can manipulate an image so that it looks much the same to a human but has a quite different perceptual hash. Such manipulations would produce a false negative for a CSAE image. False positives—images that look nothing alike but have very similar or even identical perceptual hashes—are also possible. This leaves an opening for mischief, and worse. It would unfortunately be all too easy to arrange for, say, a candidate for elective office to receive a photo that looks innocuous, store it, and only later learn that the photo triggered a law enforcement alert because its perceptual hash matched that of known CSAM. The damage would be high and might never go away (recall Pizzagate).
Would such an “attack” be feasible? Yes. Shortly after a researcher published the code used in Apple’s NeuralHash, an Intel researcher produced a hash “collision”: two images that look nothing alike but have the same perceptual hashes. Such capabilities are not limited to researchers; they are available to others as well—especially those with an incentive to cause problems. As computer scientists Carmela Troncoso and Bart Preneel observed, “In the arms race to develop such detection technologies, the bad guys will win: scientists have repeatedly shown that it is easy to evade detection and frame innocent citizens.”
Other proposed techniques for recognizing CSAE—including material not previously seen—rely on machine learning. But as my co-authors and I discussed in “Bugs in our Pockets,” false positives and false negatives are a problem here too.
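The scale at which such classifiers would run is what turns even small error rates into a serious problem. The back-of-the-envelope arithmetic below uses assumed, illustrative numbers—not measurements of any deployed system—simply to show the effect.

```python
# Illustrative base-rate arithmetic with assumed numbers.
messages_scanned_per_day = 100_000_000_000  # assumption: ~10^11 messages/day
false_positive_rate = 1e-5                  # assumption: 1 error per 100,000 scans

false_flags_per_day = messages_scanned_per_day * false_positive_rate
print(f"{false_flags_per_day:,.0f} innocent items flagged per day")
# Under these assumptions: 1,000,000 innocent items flagged per day.
```

Even if these assumed rates are off by an order of magnitude, the number of innocent people swept in would overwhelm both the reviewers and those wrongly flagged.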
The impact of false positives can be grueling for those accused. For some types of criminal investigations, the taint may fade once the person is cleared; that is often not the case for accusations of CSAE. Such accusations carry real—and often permanent—costs for the innocent people who have been accused, and their families may feel the effects as well.
The frequency of false positives in practice is difficult to know; because it is illegal to store CSAE material, studies of CSAE activity are hard to come by. However, data from the Irish Council for Civil Liberties (ICCL) sheds some light on the issue. Responding to an information request from the ICCL, the Irish police reported that NCMEC had provided 4,192 referrals in 2020. Of these, 409 were actionable and 265 were completed. Another 471 referrals—more than 11 percent of the total—were “Not Child Abuse Material.” The Irish police nonetheless stored the “(1) suspect email address, (2) suspect screen name, [and] (3) suspect IP address.” Now 471 people have police records because a computer program incorrectly flagged them as having CSAM.
With that background in place, let us return to Ofcom’s report. In its 2022 study of the feasibility of using current technology to find CSAE, Ofcom reported that “[r]esearch has demonstrated that applications of perceptual hashing can be limited due to several inherent properties, such as non-negligible inaccuracy (i.e. false positives and false negatives).” Then, when the British Parliament was debating passage of the Online Safety Act, the junior minister for arts and heritage stated that the government would “only require companies to scan their networks when a technology was developed that was capable of doing so.” As Ofcom had already stated, perceptual hashing, the primary method for identifying CSAE, was inadequate because of its inherent properties. In other words, the legislature could ignore technologists’ objections that E2EE was incompatible with the act’s reporting requirements because Ofcom would not put forth an accredited technology—such technology did not exist.
There is plenty of wiggle room in the phrase “capable of doing so.” In recent years, we have seen many governments, including well-respected democracies, ignore scientific reality on climate change, coronavirus protections, and other issues in order to score political points. But to pass a law requiring the use of a technology that doesn’t exist—and that many believe cannot be developed—is duplicitous and dangerous.
In the case of the U.K. law, this wiggle room could lead startups to implement a less secure form of encryption than E2EE out of concern that the government might later require them to replace E2EE protections. The result of the Online Safety Act could thus well be to leave U.K. residents less safe, not more so.
Meanwhile, both the EU and U.S. are pressing forward with legislation that, much like the Online Safety Act, is willing to sacrifice E2EE in the name of child safety. None of these bills explicitly prohibits E2EE. Instead, they present requirements effectively preventing the technology’s use without explicitly saying so.
The EU proposal requires that if any hosting service or provider becomes aware of “any information indicating potential online child sexual abuse on its services,” the provider must report it or face legal consequences. The word “potential” is key. How does a provider become aware of potential CSAE on its service unless it scans all the material on the site—and unless that material is unencrypted?
The U.S. Congress has two bills under discussion that would also effectively eliminate E2EE: the Stop CSAM Act and the EARN IT Act.
The Stop CSAM Act would allow platforms to be held liable for the “intentional, knowing, or reckless promotion or facilitation” of CSAE. When E2EE is in use, a platform cannot necessarily know whether CSAE is present on its service. Thus, the bill effectively prohibits the use of E2EE.
The same is the case for the EARN IT Act. The bill appears to preserve Section 230, which largely provides immunity to internet services for the third-party content they host. But there’s a catch: The bill would remove those liability protections for any CSAE on a site. Providers can know whether CSAE material is on their sites only if they scan the sites’ content. Putting it more bluntly: no E2EE-protected material on internet sites.
The bills’ proponents state that technologies will be developed to satisfy the requirements, technologies that can function even with E2EE in place. But as I’ve already discussed, no such technologies exist at present, and most computer security researchers doubt they can be developed. This is a legislative cart that requires a unicorn to pull it.
This is not the first time policymakers have pushed technical mandates before an effective, working solution existed. One example is the 1994 Clipper Chip initiative for escrowed encryption, with keys held by agencies of the U.S. government. There were strong political objections to the proposal, but there were also serious concerns about its security and technical viability. A 1996 National Academies study on cryptography policy noted that, before requiring any form of escrowed encryption, it was essential to gain operational experience with a large-scale infrastructure for doing so. In other words, first show that there is a technology that works in practice, then legislate. The U.K. law—and the EU and U.S. proposals—have this backward.
The Path Forward
The findings from the Carnegie study on encryption policy provide guidance for governments on how to move forward on the problems E2EE poses for CSAE investigations. In spite of differences within the committee on the issues of E2EE and locked devices, we found much common ground:
- Security has many forms and is intertwined with privacy and equity.
- Assess the range of impacts.
- Attend to international dynamics.
- Think long term.
- Accept imperfection.
- Place encryption within a broader context of law enforcement capabilities.
- Recognize there is no purely technical approach.
- Recognize the challenge of implementation.
- Balance the need for a strategic approach and the need for technical detail.
The last two points echo the issue raised by the 1996 National Academies study: first determine that there is a technology that can effectively do what the policy seeks, then legislate. It is a point ignored by the U.K. in passing the Online Safety Act and by both the EU and the U.S. Congress in debating such bills now. In the name of child safety, these bills would eviscerate end-to-end encryption and thus weaken public safety.
I would like to end this tour of the current E2EE and CSAE situation with a discussion of the principled approach that Apple has taken on the issue. It’s a strong stand—and one that holds true to science and privacy.
In the past, Apple has led the industry in securing devices and encrypting communications. Apple urged the U.K. government to amend the Online Safety Bill to protect E2EE. Apple also indicated it might remove its encrypted communications products, iMessage and FaceTime, from the U.K. if the bill passed. Signal, an app that provides E2EE for voice and video calls and text messaging, announced a similar intention. (Because the U.K. currently has no plan to require technological changes under § 67(1), Apple and Signal are not making changes at this time, even though the Online Safety Act is now law.)
The stance was an interesting change for Apple. In 2021, the company had developed two systems to combat CSAE. These “client-side scanning” systems, which would run on iPhones, used perceptual hashing and machine learning to detect CSAE. One system would scan iMessages for nude images being sent to or from children on Family Sharing plans; if a child under 13 nonetheless chose to receive or send such a photo after being warned about its content, Apple’s 2021 plan called for notifying the child’s parents. The second system would check photos being uploaded to iCloud for CSAM. Public concern from civil liberties organizations, privacy advocates, and computer scientists led Apple to pause both projects in the fall of 2021; in December 2022, the company abandoned them.
Apple has now taken a public stance on this. “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit,” said Apple’s director of user privacy and child safety, Erik Neuenschwander. He explained that the scanning technology was not “practically possible to implement without ultimately imperiling the security and privacy of our users.” Neuenschwander elaborated, “It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”
Apple has it exactly right. As I and some of my colleagues have written, LGBTQ children often question their sexuality before they discuss it with their parents (some never do, in fact, due to issues of safety at home). Outing them, which the Apple technology would have done, would have imperiled them, not made them safer. And having a child’s phone report their activities to their parents would instill the notion that online surveillance is acceptable—surely not a lesson we want to teach children.
The fact that an Apple-like system could be repurposed for censorship and surveillance made the 2021 product effort even more problematic. The EU has documented instances in which spyware has been used to “destroy media freedom and freedom of expression” in Hungary and to silence government critics in Poland. Client-side scanning technologies could also be used for such purposes. In publicly backing away from client-side scanning and returning to a privacy-protective product regime, Apple took a strong and highly principled stance that protected people’s privacy and security—and, in so doing, society’s.
Think differently. Think long term. Think about protecting the privacy and security of all members of society—children and adults alike. By failing to consider the big picture, the drafters of the U.K. Online Safety Act took a dangerous, short-term approach to a complex societal problem. The EU and U.S. have the chance to avoid the U.K.’s folly; they should do so. The EU proposal and the U.S. bills are not sensible ways to address the public policy concerns of online abetting of CSAE. Nor are they reasonable approaches in view of the cyber threats our society faces. The bills should be abandoned, and we should pursue other ways of protecting both children and adults.
– Susan Landau is Bridge Professor in The Fletcher School and the School of Engineering, Department of Computer Science, at Tufts University, and founding director of the Tufts MS program in Cybersecurity and Public Policy. Landau has testified before Congress and briefed U.S. and European policymakers on encryption, surveillance, and cybersecurity issues.