The Rise of the Compliant Speech Platform

Content moderation is becoming a “compliance function,” with trust and safety operations run like factories and audited like investment banks.

There’s a revolution happening inside platforms like YouTube and Instagram, and it is changing the way those companies govern speech on the internet. Small armies of platform employees, consultants, and vendors are overhauling the daily operations of trust and safety teams—the people who apply platforms’ content rules and decide what speech will remain online. The goal is to make every decision about users’ speech standardized, trackable, and ultimately reviewable by government regulators or courts. Deciding what people can say and do on the internet is increasingly becoming, in corporate-speak, a “compliance” function—like following the laws that govern financial transactions at a bank, or safety inspections at a factory—under laws like the EU’s Digital Services Act (DSA). Whether that fundamental reframing of speech governance will be a net positive for internet users or society remains to be seen. 

How We Got Here

It was not always like this. Once upon a time, operating a speech platform was something like running a bookstore and a bar full of dangerous brawling hooligans all at once. The platform employees who interpreted laws or enforced the companies’ own discretionary rules decided individual, human questions about speech and communication. As a lawyer for Google in 2004, for example, I recall fielding a fraternity’s complaint about the disclosure of its secret handshake one day, and a Turkish politician’s demand that we suppress critical news reporting about him the next. The Turkish politician went away when we declined his request. In other cases, we litigated over removal demands that we thought overstepped national laws and violated users’ rights to speak and access information. 

That artisanal and sometimes combative model of trust and safety persists at some smaller companies. For the biggest platforms, it became largely untenable as the sheer volume of online content grew. By the 2010s, a platform like Facebook or YouTube needed its speech rules to work at an industrial scale. A rule about artistic nudity, for example, had to be concrete and simple enough for tens of thousands of contractors around the world—or even machines—to apply with some modicum of consistency. Speech moderation got scaled up and standardized. Still, the rules that platforms adopted could never quite keep up with the sprawling diversity of speech on the internet, or with the human capacity to say entirely new things, whether wonderful or terrible. 

The current wave of change takes platforms’ existing routinized speech governance to another level entirely. Framing content moderation as a compliance function puts it firmly within a legal practice area that often has stated goals like “avoiding trouble” with regulators and creating a “culture of compliance.” Corporate compliance teams typically do things like building oversight systems to avoid violating anti-bribery laws, tax laws, or securities regulation. Platforms and other companies might use this approach for relatively predictable and formulaic obligations, such as those under privacy and data protection laws like the General Data Protection Regulation (GDPR). Building such systems to reduce complexity and risk by erring on the side of overcompliance often makes sense. Doing more than the law requires might cost the company a little extra money, but it’s worth it to stay out of trouble. And the public may be better off when companies opt to overcomply with laws in areas like privacy. 

Overcompliance with laws about speech is different. Laws like the DSA in Europe and the Digital Millennium Copyright Act in the U.S. already effectively encourage platforms to protect themselves by silencing users’ lawful expression under “notice and takedown” systems for potentially unlawful content. We should be alert to the possibility that the current “compliance-ization” of trust and safety may make that problem worse.

The shift toward compliance-oriented governance of online speech is driven by laws like the EU’s DSA, the U.K.’s Online Safety Act, and Singapore’s Code of Practice for Online Safety. Under the DSA, even a platform with a hundred employees must standardize content moderation so thoroughly that each and every decision can be formally reported to EU regulators and listed in a government-operated database. Every time YouTube removes a comment, for example, it must notify EU regulators in a formal report listing over a dozen state-mandated data points about the decision, and explaining the action by reference to speech policies listed in the database’s technical specifications. A platform can select “gender-based violence” as a basis for moderation, for example, but not sexual-orientation-based violence. And while platform employees can choose to spend extra time elaborating on the basis for their decision, the simpler course is to report—and perhaps ultimately govern user speech in the first place—using the categories provided by regulators. 
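To make the reporting obligation concrete, here is a minimal sketch of what one machine-readable “statement of reasons” record might contain, loosely modeled on the DSA’s Article 17 requirements (the ground for the decision, whether automation was involved, the territorial scope of the restriction, and available redress options). The field names and values below are illustrative assumptions, not the Transparency Database’s actual schema.

```python
import json

# Illustrative sketch only: field names are assumptions loosely modeled on the
# DSA's Article 17 "statement of reasons" requirements, not the real schema of
# the EU Transparency Database.
statement_of_reasons = {
    "platform": "ExamplePlatform",          # hypothetical platform name
    "decision_type": "CONTENT_REMOVAL",     # e.g., removal, visibility restriction
    "category": "GENDER_BASED_VIOLENCE",    # must map to a regulator-provided category
    "ground": "TERMS_OF_SERVICE",           # legal ground vs. the platform's own rules
    "facts_and_circumstances": "Comment reported by users; reviewed by a moderator.",
    "automated_detection": False,           # whether automated means flagged the content
    "automated_decision": False,            # whether automated means made the decision
    "territorial_scope": ["EU"],            # where the restriction applies
    "redress_options": ["internal_appeal", "out_of_court_dispute_settlement"],
    "decision_date": "2024-05-01",
}

print(json.dumps(statement_of_reasons, indent=2))
```

Note that a record like this captures everything about the decision except the affected speech itself, a gap the next paragraph takes up.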

The EU’s database now contains over nine billion such reports from platforms, describing content moderation actions carried out since August 2023. This remarkable yet oddly denatured trove of information can tell you, for example, exactly how many times Discord has removed users’ speech as a “risk for public security.” But you can’t see what the affected posts actually said. The real underlying decision and clash of human values is invisible, and only fungible, machine-trackable data points remain.

At the biggest platforms, this state-imposed mania for quantification goes much further. To comply with the DSA, companies like Meta and Alphabet have opened their doors and their databases to sweeping annual audits. Firms like Deloitte or Ernst & Young recently finished assessing platforms’ DSA compliance, including their efforts to mitigate “systemic risks.” The DSA spells out relevant areas of “risk,” including such vast and potentially competing societal priorities as “freedom of expression,” “civic discourse,” “human dignity,” and protecting the “rights of the child.” For trust and safety teams, balancing values like these often means deciding what speech rules to adopt—and then resolving the inevitable hard questions that arise. A platform that prohibits images of child abuse or nudity, for example, might struggle to assess the famous Vietnam War-era “napalm girl” photograph. That image has tremendous historic significance, but it also depicts a naked and suffering child. Does a platform better mitigate “systemic risks” by leaving it up, or by taking it down?

Regulators insist that it is not their job to answer such questions. It isn’t supposed to be auditors’ job, either. Devising speech rules for lawful but potentially harmful content is, officially, up to platforms themselves. Auditors must thus assess the vast array of DSA-relevant risks without imposing any independent judgment about correct rules for speech. As one auditor explained to me, he would not ask, “What is the right rule?” but, instead, “What process did you use to arrive at this rule, and what data and metrics justify it?”

If converting nuanced decisions about speech and values into numbers sounds difficult to you, you are not alone. Trust and safety workers have chafed at the requirement to reclassify their editorial decisions into state-mandated categories for purposes of the DSA database, for example. Auditing firms, civil society groups, and platforms have all complained about the effort to turn complex human decision-making into something quantifiable and auditable. Auditors point to the lack of established standards, methodologies, or benchmarks for their assessments. They also object to the law’s requirement that auditors attest to their findings at the highest degree of certainty, exposing the firms to liability if their findings are wrong. The platform audits have been, accordingly, exhaustive—and, in some cases, transformative—for the trust and safety teams being audited. New compliance efforts have shifted priorities, resources, and org charts to focus on the process for making decisions about users’ speech rather than the underlying speech rules or individual decisions affecting users.

What Could Go Wrong?

Is this upheaval achieving its intended goals? In many ways, compliance-ization of content moderation is justified. The enormous companies that host and control our online lives should be accountable to the public and to governments. That can happen only if their operations, including trust and safety, are designed for consistency and fairness. Unless these systems are rendered visible and comprehensible to outsiders, platforms can only ever check their own homework. Building the standardized processes and tracking systems to make supervision possible serves important goals.

In other ways, the compliance model is a mismatch with the regulation of speech. Human rights systems around the world require that state control and influence over speech be kept to a minimum. The idea that states can govern speech-related “systems” and “metrics” without crossing the line into governing speech itself may yet prove to be dangerously naive. Perhaps more fundamentally, the standardization at the heart of compliance models may simply be inconsistent with enforcing nuanced rules for human expression. The ocean of speech sloshing around the global internet is dynamic and unruly. Trust and safety teams must evolve, experiment, make mistakes, and iterate. They must be adaptable when words like “queer” are used hatefully in one context and as a term of pride in another. Or when cartoon frogs and dishwasher detergent are innocuous one day, sinister the next, and the subject of complex satire and commentary the week after that. Trust and safety work gets harder when teams must confront adversarial actors like spammers, Google bombers, and purveyors of coordinated inauthentic behavior. The optimal responses to these challenges may vary across the hundreds of language groups and untold numbers of subcultures active online. There is something disturbingly robotic in the idea that all this chaotic and generative human activity can be governed by systems so consistent, automated, and bloodless that they can be run like factories and audited like investment banks.

To be clear, this ship has already sailed. The directional turn toward a compliance approach is largely a done deal at the biggest platforms and may be unavoidable even for smaller ones given laws like the DSA. But platform-watchers should be on the lookout for the downsides and risks of such an approach. I see three major ones.

The first downside is the simplest and the ugliest. Governments can, in the guise of regulating “systems,” easily cross the line into simply telling platforms to suppress lawful speech. We have seen this already in former EU Commissioner Thierry Breton’s public letters demanding that platforms change their content moderation practices. The resulting backlash from civil society will presumably teach governments around the world an important lesson: that voicing censorship demands in public, instead of in private communications with platforms, is a mistake. Even conscientious lawmakers who aspire not to influence platform speech rules seem likely to do so in practice, as platforms attempt to “mitigate” speech-borne risks to regulators’ satisfaction. Channels for state influence of all kinds may become much simpler once platforms stop viewing decisions about “lawful but awful” speech as an area beyond government purview and start treating them, instead, as a regulatory compliance matter akin to accounting or securities disclosures.

The second concern is about competition and missed opportunities to evolve past our current internet ecosystem. DSA compliance is expensive. For platforms of all sizes, it is currently pushing trust and safety resources out of substantive decision-making about user posts and into newly mandated procedures. One moderator for a smaller platform told me that her team now spends too much time on work “that is not keeping anybody safe.” Another told me that the new processes leave little room for “blue-sky thinking” or innovation in handling online threats, given the time-consuming internal process required for even modest or experimental changes. 

More broadly, process mandates like those of the DSA favor incumbents who have already built large teams and expensive tools for content moderation, and who can field teams of policy specialists in Brussels to negotiate new standards like the EU’s pending guidance on protection of minors. Smaller competitors may find themselves held to standards that make sense for Facebook but are a mismatch for their own business models. Some may exit the global playing field by simply blocking European users, effectively forfeiting that key market. Others may turn to the growing number of newly minted vendors offering to outsource DSA compliance. I expect these vendors, along with the auditors assessing the biggest platforms, to be a force for standardization across platforms—making each platform more like its competitors and leaving internet users with fewer options for platforms with diverse product features, speech rules, or moderation systems.

The third problem relates to the old business axiom, “what gets measured gets managed.” For some regulatory undertakings, it is relatively clear what metrics are important. Municipal water systems should measure for contaminants. U.S. banks should track and report transactions above a certain dollar value. But even for relatively straightforward endeavors, focusing on the wrong data can backfire. Surgeons who are evaluated based on successful outcomes may avoid taking on hard cases. Teachers may train students to take tests, but not think beyond them. As accounting historian H. Thomas Johnson put it, “Perhaps what you measure is what you get. More likely, what you measure is all you’ll get. What you don’t (or can’t) measure is lost.” 

The risk of focusing on the wrong metrics seems far greater in systems for which even the definition of success—be it in moderating specific user posts, or in striking systemic balances between safety and freedom—is contested. Some trust and safety professionals have told me they expect regulators to focus on the “turnaround time” required to resolve complaints about content, for example, because it is easy to quantify and sounds like a neutral measure of service quality. But such time-based metrics are already notorious for the distortions they introduce, encouraging rationally self-interested moderators to make choices that are bad for everyone else. For example, platform speech rules that can be enforced using a single click in moderators’ user interface may be prioritized over rules that require a moderator to spend precious seconds on additional clicks. One 1990s business guru described this problem in terms much like those used by platform employees criticizing recent regulations. “Tell me how you measure me[,]” he wrote, “and I will tell you how I will behave. If you measure me in an illogical way … do not complain about illogical behavior.”

Compliance and Complex Systems

Observers of complex systems have noted the limitations of standardization for well over a century. Frederick Taylor celebrated the efficiency of reducing “factory labor into isolable, precise, repetitive motions” in the 19th century, while Karl Marx lamented the same developments as “despotism.” Max Weber mocked the idea that human conflict might one day be resolved by automated “judges” that can assess complaints and then “eject the judgment together with the more or less cogent reasons for it[.]” James C. Scott, describing Germany’s catastrophic attempts at “scientific forestry” in the 18th century, wrote that the “intellectual filter necessary to reduce the complexity to manageable dimensions” inevitably led to real-world outcomes that served some interests (in that case, short-term timber revenues) while utterly neglecting others (in that case, ecological diversity and the survival of forests). 

Platform trust and safety teams’ attempts to govern users’ speech and behavior are complex systems in the extreme. Regulation of those efforts is appropriate and inevitable. But it would be hubristic to think that we know what standardized systems or metrics can meaningfully measure success. 

– Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users’ rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start. Published courtesy of Lawfare
