Censorship is on the rise across America. With censorship, questions always arise about who gets censored, who gets to do the censoring, and by what standards. Questions also arise about the goals of censorship since we know that censoring can increase the appeal of banned ideas.
The debate about censorship of medical ideas is about to go to the US Supreme Court.
The case arose from the government's responses to social media posts about the appropriate public health measures during the COVID pandemic. Plaintiffs in the case include two widely respected tenured professors, Jay Bhattacharya at Stanford and Martin Kulldorff at Harvard. These two physicians, along with two state attorneys general and a few other individuals and organizations, claimed that the government was censoring them by pressuring social media platforms to delegitimize them.
Three issues need to be disentangled. First, can social media companies themselves have policies about what gets published on their sites and who can publish there? (Those issues will be clarified by another case coming to the Supreme Court this term.) Are they, as private companies, allowed to make their own rules? Or should the government regulate those rules to ensure equal access to digital platforms? Can they have rules that, say, prohibit racist hate speech or the live-streaming of mass shootings? And who gets to decide what counts as hate speech?
Second, can the government tell them what rules they should have or how they should interpret those rules? Here it is a little more complicated. The government can criticize the social media companies. But it cannot, generally, coerce them into changing their rules or their interpretation and enforcement of those rules. Citizens have a right to free speech. Government officials, technically, do not, at least not when they are acting in their official capacity. The exception is when the government claims that public safety or national security is at stake. There the slope gets very slippery. Was it essential for the security of the country to suppress information about the Korean War, the Vietnam War, or the 9/11 attacks and the invasion of Iraq?
Third, if government officials request that social media companies follow certain rules, and even try to persuade them to do so, when do such actions become coercive? It can be difficult to draw the line between persuasion and coercion when the government is involved. The Appellate Court noted that a government message is coercive—as opposed to persuasive—if it “can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.” It is not enough for information to be deemed inaccurate, offensive, or opposed to official government policy. In a key case from the 1960s, Chief Justice Earl Warren argued that even “erroneous statements” and “statements criticizing public policy and the implementation of it” must be protected.
Of note, the decision is not about the rights of government officials to promulgate their own messages. They have the right and the responsibility to do so. But, in pursuit of that goal, they cannot threaten private citizens or private corporations that offer differing opinions. Genevieve Lakier, a University of Chicago law professor, clarified the distinction, noting, “I don’t think that White House officials should be writing to platforms and saying, ‘Hey, take this down immediately.’”
The issues can be illustrated by questions about a policy proposal called The Great Barrington Declaration (GBD). The GBD was issued in October of 2020. It criticized certain public health approaches designed to slow the spread of COVID-19. It was a short document. The central claim of the GBD was that the harms of prevailing policies outweighed the benefits. In particular, the Declaration claimed that lockdown policies led to lower childhood vaccination rates, worsening cardiovascular disease outcomes, fewer cancer screenings and deteriorating mental health. The authors were convinced that these measures would disproportionately harm the poor and underprivileged. The authors advocated what they called “focused protection.” In practice, this meant allowing “those who are at minimal risk of death to live their lives normally to build up immunity to the virus through natural infection, while better protecting those who are at highest risk.” Two of the plaintiffs in the case at hand were co-authors of the Great Barrington Declaration and promoted its ideas on social media.
The White House, the CDC, the FBI, and a few other agencies urged the platforms to remove such disfavored content and accounts from their sites. In some cases, the platforms complied. They cooperated with the government in other ways, giving government officials access to an expedited reporting system, downgrading or removing flagged posts, and de-platforming users. The platforms also changed their internal policies to capture more of the type of content that the government deemed problematic and sent steady reports on their activities to government officials. That went on through the COVID-19 pandemic, the 2022 congressional election, and continues to this day.
The Appellate decision includes detailed examples of collusion. “For example, one White House official demanded more details and data on Facebook’s internal policies at least twelve times, including to ask what was being done to curtail “dubious” or “sensational” content, what “interventions” were being taken, what “measurable impact” the platforms’ moderation policies had, “how much content [was] being demoted,” and what “misinformation” was not being downgraded. In one instance, that official lamented that flagging did not “historically mean[] that [a post] was removed.” In another, the same official told a platform that they had “been asking [] pretty directly, over a series of conversations” for “what actions [the platform has] been taking to mitigate” vaccine hesitancy, to end the platform’s “shell game,” and that they were “gravely concerned” the platform was “one of the top drivers of vaccine hesitancy.” Another time, an official asked why a flagged post was “still up” as it had “gotten pretty far.” The official queried “how does something like that happen,” and maintained that “I don’t think our position is that you should remove vaccine hesitant stuff,” but “slowing it down seems reasonable.” Always, the officials asked for more data and stronger “intervention[s].”
The officials argued that they only “sought to mitigate the hazards of online misinformation” by “calling attention to content” that violated the “platforms’ policies,” a form of permissible government speech.
The Plaintiffs maintain that although the platforms stifled their speech, the government officials were the ones pulling the strings—they “coerced, threatened, and pressured [the] social-media platforms to censor [them]” through private communications and legal threats.
Censorship is not the answer. Private companies, like private citizens, have the right to decide what to post or not to post. Government officials do not. For the government to suppress speech, or to pressure others to suppress speech, is an obvious sign of government weakness. If the government cannot convince the citizenry of the appropriateness of its policies, the solution is not to suppress criticism of those policies, it is to better explain the rationale for the policies.
In the case of COVID, many government policies were based on flimsy evidence, speculation, or political values. For example, some jurisdictions allowed bars and restaurants to stay open but prohibited church gatherings or funerals. Post-pandemic analyses show that citizens in states and countries that adopted different policies often fared better. There are legitimate scientific questions that still need to be debated.
The censor always believes that he or she is acting in the interest of the community. It is almost never the case, however, that the community benefits from government censorship. The reasons have been straightforwardly articulated by the philosopher Bernard Williams, who noted that we generally do not know in advance what social, moral, or intellectual developments will turn out to be possible, necessary, or desirable for human beings and for their future. It would be difficult, if not impossible, to devise a form of words that would reliably separate trash from work of redeeming value.
The boundaries of free speech are indistinct and changing. Coetzee noted that “the liberal consensus on freedom of expression that might once have been said to reign among Western intellectuals and that indeed did much to define them as a community has ceased to obtain.” Lakier thinks that SCOTUS will have a delicate needle to thread: “With the social-media platforms, it’s been like the Wild West. There are no rules of the road. We have no idea what’s O.K. or not for someone in the White House to e-mail to a platform. One of the benefits of the order and the injunction is that it’s opening up this debate about what’s O.K. and what’s not. There are important free-speech values that are at stake and no one is really doing much to protect them.”