A federal judge’s decision this week to limit the government’s communication with social media platforms could have far-reaching side effects, according to researchers and groups that fight hate speech, online abuse and misinformation: It could further hamper efforts to curb harmful content.
Alice E. Marwick, a researcher at the University of North Carolina at Chapel Hill, was one of several disinformation experts who said Wednesday that the decision could hamper work aimed at preventing false claims about vaccines and voter fraud.
The order, she said, followed other efforts, mostly by Republicans, who are “part of an organized campaign pushing back against the idea of disinformation altogether.”
Judge Terry A. Doughty issued a preliminary injunction on Tuesday, saying that the Department of Health and Human Services and the Federal Bureau of Investigation, along with other parts of the government, must stop corresponding with social media for “the purpose of inciting, encouraging, pressuring or inducing in any way the removal, deletion, suppression or reduction of content containing protected free speech.”
The ruling stemmed from a lawsuit by the attorneys general of Louisiana and Missouri, which accused Facebook, Twitter and other social media sites of censoring right-wing content, sometimes in league with the government. They and other Republicans cheered the judge’s move, in the U.S. District Court for the Western District of Louisiana, as a victory for the First Amendment.
Several researchers, however, said the government’s work with social media companies was not a problem as long as it did not force them to remove content. Instead, they said, the government has historically tipped off companies about potentially dangerous messages, such as lies about election fraud or misleading information about Covid-19. Most misinformation or disinformation that violates the policies of social platforms is flagged by researchers, nonprofits, or people and software at the platforms themselves.
“That’s the really important distinction here: The government should be able to tell social media companies about things they think are harmful to the public,” said Miriam Metzger, a communications professor at the University of California, Santa Barbara, and an affiliate of its Center for Information Technology and Society.
A bigger concern, researchers said, is a potential chilling effect. The judge’s decision prevented some government agencies from communicating with some research organizations, such as the Stanford Internet Observatory and the Election Integrity Partnership, about removing content from social media. Some of those groups have already been targeted in a Republican-led legal campaign against universities and think tanks.
Their peers said such conditions could dissuade younger scholars from researching disinformation and scare off donors who fund crucial grants.
Bond Benton, an associate communications professor at Montclair State University who studies disinformation, described the decision as “a bit of a potential Trojan horse.” It is limited on paper to the government’s relationship with social media platforms, he said, but it carries a message that misinformation qualifies as speech and its removal as the suppression of speech.
“Before, platforms could just say we don’t want to host it: ‘No shirt, no shoes, no service,’” Dr. Benton said. “This decision is now likely to make platforms a little more cautious about that.”
In recent years, platforms have relied more on automated tools and algorithms to spot harmful content, limiting the effectiveness of complaints from people outside the companies. Academics and anti-disinformation organizations have often complained that platforms are not responsive to their concerns, said Viktorya Vilk, the director of digital security and free expression at PEN America, a nonprofit that supports free expression.
“Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases,” she said. “They are less comfortable ignoring the government.”
Several disinformation researchers worried the ruling could sway social media platforms, some of which have already scaled back their efforts to curb disinformation, to be even less vigilant ahead of the 2024 election. They said it was unclear how relatively new government initiatives that presented concerns and suggestions from researchers, such as the White House Task Force to Address Online Harassment and Abuse, would fare.
For Imran Ahmed, the chief executive of the Center for Countering Digital Hate, Tuesday’s decision highlighted other problems: the United States’ “particularly lax” approach to dangerous content compared to places like Australia and the European Union, and the need to update rules governing the responsibility of social media platforms. The ruling on Tuesday cited a presentation the center made to the surgeon general’s office about its 2021 report on online anti-vaccine activists, “The Disinformation Dozen.”
“It’s bananas that you can’t show a nipple in the Super Bowl, but Facebook can still broadcast Nazi propaganda, empower stalkers and bullies, undermine public health and facilitate extremism in the United States,” Mr. Ahmed said. “This court decision further exacerbates that sense of impunity under which social media companies operate, despite the fact that they are the main vector for hate and misinformation in society.”