How the truth was murdered

Even then, Cross says, the people who were best able to talk about why these campaigns took hold and what might stop them—that is, the people under attack—were not taken seriously as experts. She was one of them, both writing about Gamergate and being targeted by it. Media attention to online abuse gathered pace after Gamergate, Mitchell told me, for a simple reason: “When you finally paid attention, you paid attention when a white woman was being targeted, but not when a Black woman was being targeted.”

And as some companies began trying to do something about abuse, those involved in such efforts often found themselves becoming the targets of exactly the same kind of harassment.

When Ellen Pao took over as CEO of Reddit in 2014, she oversaw the site’s first real attempt to confront the misogyny, racism, and abuse that had found a home there. In 2015, Reddit introduced an anti-harassment policy and then banned five notorious subreddits for violating it. Redditors who were angry at those bans then attacked Pao, launching petitions calling for her resignation. She ended up stepping down later that year and is now a campaigner for diversity in the technology industry.

Pao and I spoke in June 2020, just after Reddit banned r/The_Donald, a once-popular pro-Trump subreddit. For years it had served as an organizing space to amplify conspiracy-fueled, extremist messages, and for years Pao had urged Reddit’s leadership to ban it. By the time they finally did, many of its subscribers had already moved off the site and on to other platforms, like Gab, that were less likely to crack down on them.

“It’s always been easier not to do anything,” Pao told me. “It takes no resources. It takes no money. You can just keep doing nothing.”

A constant deluge

It’s not as if the warnings of Pao, Cross, and others have only just penetrated mainstream consciousness, though. The floodwaters come back again and again.

The Friday before Donald Trump was elected in 2016, another conspiracy theory—one that would, in about a year’s time, help create QAnon—trended on Twitter. #SpiritCooking was easy to debunk. Its central claims were that Hillary Clinton’s campaign chair, John Podesta, was an occultist, and that a dinner hosted by a prominent performance artist was actually a secret satanic ritual. The source of the theory was an invitation to the dinner in Podesta’s stolen email archives, which had been released publicly by WikiLeaks that October. 

I wrote about misinformation during the 2016 elections, and watched as #SpiritCooking evolved into Pizzagate, a conspiracy theory about secret pedophile rings centered on pizza shops in Washington, DC. Reddit banned a Pizzagate forum in late November that year for “doxxing” people (i.e., putting their personal information online). On December 4, 2016, exactly one month after #SpiritCooking exploded, a North Carolina man walked into a DC restaurant targeted by Pizzagate believers, lifted up his AR-15 rifle, and opened fire. 

These first few months after the 2016 election marked another point in time—much like today—when the flood of disinformation was enough to get more people than usual to notice. Shocked by Trump’s election, many worried that foreign interference and fake news spread on social media had swayed voters. Facebook CEO Mark Zuckerberg initially dismissed this as “a pretty crazy idea,” but ensuing scrutiny of social-media platforms by the media, governments, and the public revealed that they could indeed radicalize and harm people, especially those already vulnerable. 

And the damage continued to grow. YouTube’s recommendation system, designed to get people to watch as many videos as possible, led viewers down algorithmically generated tunnels of misinformation and hate. On Twitter, Trump repeatedly used his huge platform to amplify supporters who promoted racist and conspiratorial ideologies. In 2017, Facebook introduced video livestreaming and was shortly overwhelmed by live videos of graphic violence. In 2019, even before covid-19, vaccine misinformation thrived on the platform as measles outbreaks spread across the US. 
The tech companies responded with a running list of fixes: hiring enormous numbers of moderators; developing automated systems for detecting and removing some kinds of extreme content or misinformation; updating their rules, algorithms, and policies to ban or diminish the reach of some forms of harmful content. 

But so far the toxic tide has outpaced their ability—or their willingness—to beat it back. Their business models depend on maximizing the amount of time users spend on their platforms. Moreover, as a number of studies have shown, misinformation originates disproportionately from right-wing sources, which opens the tech platforms to accusations of political bias if they try to suppress it. In some cases, NBC News reported in August, Facebook deliberately avoided taking disciplinary action against popular right-wing pages posting otherwise rule-breaking misinformation. 

Many experts believed that the next large-scale test of these companies’ capacity to handle an onslaught of coordinated disinformation, hate, and extremism was going to be the November 2020 election. But the covid pandemic came first—a fertile breeding ground for news of fake cures, conspiracy theories about the virus’s origin, and propaganda that went against common-sense public health guidelines. 

If the platforms’ handling of the pandemic is any guide, they are going to be largely powerless to prevent the spread of fake news about ballot fraud, violence on the streets, and vote counts come Election Day.

The storm and the flood

I’m not proposing to tell you the magical policy that will fix this, or to judge what the platforms would have to do to absolve themselves of this responsibility. Instead, I’m here to point out, as others have before, that people had a choice to intervene much sooner, but didn’t. Facebook and Twitter didn’t create racist extremists, conspiracy theories, or mob harassment, but they chose to run their platforms in a way that allowed extremists to find an audience, and they ignored voices telling them about the harms their business models were encouraging.

Sometimes these calls came from within their own companies and social circles. 

When Ariel Waldman, a science communicator, went public with her story of Twitter abuse, she hoped she’d be the last person to be the target of harassment on the site. It was May 2008.

By this point she’d already tried privately for a year to get her abusers removed from the platform, but she remained somewhat optimistic when she decided to publish a blog post detailing her experiences.  

After all, she knew some of the people who had founded Twitter just a couple of years earlier. 

“I used to hang out at their office, and they were acquaintances. I went to their Halloween parties,” Waldman told me this summer. There were models for success at the time, too: Flickr, the photo-sharing website, had been extremely responsive to requests to take down abusive content targeting her. 

So she wrote about the threats and abuse hurled at her, and detailed her emails back and forth with the company’s founders. But Twitter never adequately dealt with her abuse. Twelve years later, Waldman has seen the same pattern repeat itself year after year. 

“Choosing to have people whose main objective is to constantly spew hate speech and harm other people on a platform—that’s a decision. No one has forced them to make that decision,” she says. 

“They alone make it. And I feel that they increasingly act as if—you know, that it’s more complicated than that. But I don’t really think it is.” 

I don’t know what to tell you about how to stop the flood. And even if I did, it wouldn’t undo the considerable damage from the rising waters. The harm done to the people who tried to warn the rest of us, only to be turned into footnotes, is permanent.

Today, Mitchell notes, the same groups that engaged in mob campaigns of abuse and harm have reframed themselves as the victims whenever there are calls for major social-media platforms to silence them. “If they have had the right to run amok for all that time, then you take that away from them—then they feel like they’re the ones who are oppressed,” she says. “While no one pays attention to the people who are actually oppressed.” 

One path toward making things better could involve providing more incentive for companies to do something. That might include reforming Section 230, the law that shields social-media companies from legal liability for user-posted content. 

Mary Anne Franks, a professor at the University of Miami who has worked on online harassment, believes that a meaningful reform of the law would do two things: limit the reach of those protections to speech rather than conduct, and remove immunity from companies that knowingly benefit from the viral spread of hate or misinformation. 

Pao notes that companies might also take these issues more seriously if their leadership looked more like the people being harassed. “You’ve got to get people with diverse backgrounds in at high levels to make the hard decisions,” she says, adding that that’s what they did at Reddit: “We just brought in a bunch of people from different racial and ethnic backgrounds, mostly women, who understood the problems and could see why we needed to change. But right now these companies have boards full of white men who don’t push back on problems and focus on the wrong metrics.”

Phillips, of Syracuse, is more skeptical. You Are Here, a book she published with her writing partner Ryan Milner earlier this year, frames online abuse and disinformation as a global ecological disaster—one that, like climate change, is rooted deeply in human behavior, has a long historical context, and is now all-encompassing, poisoning the air.

She says that asking technology companies to solve a problem they helped create cannot work. 

“The fact of the matter is that technology, our networks, the way information spreads, is what helped facilitate the hell. Those same things are not what’s going to bring us out of it. The idea that there’s going to be some scalable solution is just a pipe dream,” Phillips says. “This is a human problem. It is facilitated and exacerbated exponentially by technology. But in the end of it, this is about people and belief.”

Cross concurs, and offers a tenuous hope that awareness is finally shifting. 

“It’s impossible for people to deny that this has, like sand, gotten into everything, including the places you didn’t know you had,” she says. 

“Maybe it will cause an awakening. I don’t know how optimistic I am, but I feel like at least the seeds are there. The ingredients are there for that sort of thing. And maybe it can happen. I have my doubts.”