In July 2021, a reporter asked President Joe Biden if he had a message for platforms like Facebook. “They kill people,” he replied. “The only pandemic we have is the unvaccinated. And they kill people.”
His then-press secretary, Jen Psaki, and legions of Democrats rushed to his defense, saying Biden was referring to the so-called disinformation spread on the platform by the 'disinformation dozen' – a dozen accounts deemed responsible for the vast majority of the platform's vaccine-skeptical content. But Biden, Psaki, and other members of the administration have frequently used White House podiums to make bold and inaccurate statements about the harm caused by social media companies, either implicitly suggesting or explicitly demanding that these companies change their content moderation practices in accordance with administration preferences.
“Facebook needs to act faster to remove harmful posts,” Psaki said from her official perch, adding that “you shouldn't get banned from one platform and not others for providing misinformation” – as if one platform had any control over the decisions made by another.
This type of government pressure – the “intimidation, threats, and cajoling” that attempt to “influence the decisions of private platforms and limit the publication of disfavored speech” – is known as jawboning, and it is the subject of a new report by Will Duffield, a policy analyst at the Cato Institute. “Left unchecked,” he warns, “[jawboning] threatens to normalize itself as an extraconstitutional method of speech regulation.”
“Jawboning occurs when a government official threatens to use their power – whether it be the power to prosecute, regulate, or legislate – to compel someone to take an action that the state official cannot,” Duffield writes. “Jawboning is dangerous because it allows government officials to assume powers not granted to them by law.”
In the summer of 2021, for example, “Surgeon General Vivek Murthy issued an advisory on health misinformation, including eight guidelines for platforms,” following comments from Psaki and Biden. “By itself, the advisory would have been harmless, but statements from other members of the administration have suggested penalties for non-compliant platforms,” Duffield writes.
“White House Communications Director Kate Bedingfield capped off the effort in a breathtaking Morning Joe interview. Prompted with a question about removing Section 230, she replied, 'We're looking at that, and they definitely should be held accountable, and I think you've heard the president speak very aggressively about that…' By invoking Section 230 – the intermediary liability protections that social media platforms rely on – Bedingfield added a vague threat to the administration's demands.
“By raising the specter of amending or repealing Section 230, the Biden administration made a devious threat. Repealing Section 230 would not make vaccine misinformation illegal, but it would hurt Facebook by exposing it to litigation over its users' speech. By demanding the removal of misinformation and threatening repeal, the administration sought to bully Facebook into removing speech that the government could not touch.”
Duffield has worked to track prominent examples of jawboning (his database, found here, is filled with examples of censorship attempts by politicians from both parties). Although “not all requests are paired with a threat,” he writes, “all of the requests are made amid discussions of possible social media regulation.”
“In general, we know a lot of jawboning happens behind closed doors,” Duffield tells Reason. “There were, before 2016, a few high-profile cases like WikiLeaks,” he notes, “but the current era of platform jawboning really begins after the 2016 election.” He points to Sen. Dianne Feinstein's (D–Calif.) 2017 attempts to crack down on alleged Russian misinformation on social media, as well as the possibility that Twitter's decision to limit the spread of the Hunter Biden laptop story stemmed from the way the company had come under fire, been trotted before Congress, and then tightened its hacked materials policy. “If they haven't taken that down, and it turns out to be a foreign operation, and it changes the course of the election, they're going to be testifying before Congress again, hammered with regulations and fines,” noted disinformation researcher Clint Watts, according to the Cato report.
Some platforms have largely resisted demands from government officials, whether explicit or implicit. The cloud-based instant messaging service Telegram, Duffield notes, is “notorious for ignoring both requests and court orders,” but that is likely because the platform isn't based in the United States; it therefore has less reason to comply with U.S. laws than platforms like Facebook and Twitter. (Telegram, it should be noted, beats competitors like Facebook Messenger and WhatsApp in much of Eastern Europe and the Middle East – including, interestingly, both Ukraine and Russia.)
But Telegram's case is, unfortunately, an exception to the rule. Whether it's congressional staffers pressuring social media workers behind closed doors to make certain content moderation calls, or sitting U.S. senators hauling tech CEOs in to testify before them – and to sit quietly through their demagoguery and harassment – “it is not the job of Congress to oversee, second-guess, or direct the decisions of private intermediaries,” Duffield writes. “Such oversight assumes a role in the regulation of speech that the Constitution specifically denies to Congress.”
And it's not just Section 230 and changes to liability protections that members of Congress and Biden administration officials are using to threaten companies. “While some changes to Section 230 would change how platforms moderate speech, antitrust would hurt or dismember the non-compliant business,” Duffield says.
This may well be jawboning's ominous, bipartisan future.