On May 14, 2022, a horrific mass shooting took place at a Tops Friendly Markets store in a predominantly Black neighborhood of Buffalo, New York. The white supremacist gunman, 18-year-old Payton Gendron, killed 10 people and wounded 3 others in a racially motivated attack that he had planned for months. Even more disturbingly, Gendron livestreamed the first two minutes of his shooting rampage on Twitch before the platform took down the video.
But those two minutes were enough time for internet users to make screen recordings and widely re-share clips across Twitter, Facebook, Reddit, and other social media platforms. The Buffalo supermarket shooting video went viral, racking up millions of views within hours. On Twitter, one copy posted by the account @BuffaloShooter gained over 50,000 likes and 20,000 retweets before being removed. "This is one of the most disturbing things I've ever seen," one commenter wrote. "Twitter needs to get this taken down ASAP."
The rapid, uncontrolled spread of the graphic Buffalo shooting video on Twitter and other social networks raises urgent questions about content moderation practices and the role of social media in amplifying extremist violence. Time and again, we've seen how digital platforms can be exploited to broadcast mass murder, from the Christchurch mosque shootings to the beheading of journalist James Foley by ISIS. Livestreaming capabilities, algorithmic recommendation systems, and frictionless sharing features allow hateful ideologies and depraved content to circulate faster and farther online than ever before.
Timeline of the Video's Spread on Twitter
To understand the scale and speed of the Buffalo shooting video's spread, let's break down a timeline of how it unfolded on Twitter minute by minute:
- 2:28 PM EDT – Payton Gendron begins livestreaming on Twitch as he arrives at Tops Friendly Markets. The first shots are fired.
- 2:30 PM EDT – Twitch removes the livestream.
- 2:32 PM EDT – Screen recordings of the Twitch livestream start getting posted and shared on Twitter, Facebook, Reddit, and 4chan.
- 2:45 PM EDT – Multiple recordings of the video are circulating on Twitter, some with over 10,000 views. Users start tagging @TwitterSafety and @jack to report the content.
- 3:00 PM EDT – Engagement on tweets sharing the video skyrockets. @BuffaloShooter's post hits 20,000 likes and 10,000 retweets. Keyword searches for "Buffalo shooting video" and "Tops shooting" start trending.
- 3:30 PM EDT – National news outlets begin reporting on the shooting and the graphic video circulating online. This draws more curiosity and attention to the footage.
- 4:00 PM EDT – @BuffaloShooter and several other top accounts posting the video are suspended. But dozens of copies are still accessible across Twitter.
- 5:00 PM EDT – Twitter releases a statement saying it is "actively working to remove videos" related to the shooting and "monitoring the situation closely." But engagement has already peaked.
By the end of the day, recordings of Gendron's livestream had been viewed millions of times on Twitter alone. The video also spread widely on Facebook, YouTube, Instagram, Reddit, and fringe sites like Gab and BitChute known for hosting extremist content. While the major platforms worked to remove the footage, they were playing an endless game of whack-a-mole as users continuously re-uploaded it.
Viewed Millions of Times
So just how many views, shares, and interactions did the Buffalo shooting video rack up across social media? It's impossible to know exact numbers since platforms don't release granular data, but we can make some informed estimates.
According to one analysis by the Network Contagion Research Institute, the video was uploaded to public Facebook pages and groups over 1,800 times in the first 24 hours. Copies on YouTube gained hundreds of thousands of views before the platform disabled search results. The Internet Archive says it's received over 1,400 attempts to archive the video and related manifesto on its Wayback Machine.
On Twitter, researchers at the University of Chicago found that posts sharing the video or linking to it were retweeted over 70,000 times and liked over 180,000 times before removal. "Contrasted with similar incidents in the past like Christchurch and Halle, Twitter acted quite quickly to remove the content. But the damage was already done by the time they took action," lead study author Mitchell Makowsky told me.
To put those numbers in perspective, the Christchurch shooting video was viewed about 4,000 times before Facebook removed it, yet Facebook went on to remove roughly 1.5 million copies of it in the 24 hours after the attack. The scale of social media ensures that even if only a fraction of shares slip through, graphic content can still reach an immense audience.
The Allure of "Disaster Porn"
So why are so many people drawn to view videos of graphic real-world violence online, even when platforms try to restrict them? Psychologists have long studied the phenomenon of so-called "disaster porn" – the instinct to gawk at footage of catastrophes and carnage.
One theory is that it stems from an evolutionary drive to be hyper-aware of potential threats. In a sense, exposure to danger – whether real or mediated – can make some feel more prepared to face it. "Watching disaster porn essentially allows many people to rehearse feared situations, a form of exposure therapy," clinical psychologist John Mayer explains.
But other researchers argue the attraction is less rational than that. Grisly content triggers our hardwired fight-or-flight response, releasing adrenaline and dopamine. We get a neurochemical "high" from the shock and stimulation. Some psychologists compare disaster videos to a drug hit, delivering a potent dose of excitement without us ever being in actual peril.
Studies have also shown that the "unreal" quality of violent media can emotionally distance viewers from the reality of human suffering. The Buffalo shooting video, with its video-game-like first-person perspective, almost seems designed to induce this surreal detachment. But the impacts of this desensitization on individuals and society could be far-reaching.
Glorification vs. Condemnation
Of course, not everyone who views a mass shooting video does so out of mere morbid fascination. In the case of Buffalo, many people shared the video out of a genuine need to bear witness to racist violence and condemn white supremacy. There's an argument that shielding the public from graphic footage sanitizes the brutality of the attack and the social ills underlying it.
But experts warn that recirculating images of the shooter in the act glorifies him and risks inspiring copycats. "Mass shooters design these attacks to go viral. They leave behind manifestos for that very purpose. So we have to be extremely cautious," says Jaclyn Schildkraut, an expert on mass shootings.
It's a complex dilemma that journalists and news outlets have long grappled with. How do you inform the public about a heinous tragedy without playing into the perpetrator's desire for attention? At what point does covering their motives and ideology cross over into spreading propaganda?
In the social media age, these editorial quandaries fall to tech companies and content moderation teams that are often ill-equipped to make nuanced judgment calls at scale. Their policies on graphic violence and extremism tend to be reactive, flimsy, and inconsistently applied. Twitter, for instance, prohibits "gratuitous gore," but the definition is hazy. And with hundreds of thousands of tweets posted every minute, even the best AI systems can't catch everything.
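Part of why removal turns into whack-a-mole is that re-uploads are rarely byte-for-byte copies: clips get cropped, re-encoded, or watermarked, so platforms lean on perceptual hashes (the idea behind industry efforts like the GIFCT hash-sharing database) rather than exact file hashes. The toy Python sketch below illustrates the concept with a made-up "average hash" over tiny grayscale frames; the frame data, threshold, and function names are invented for illustration and are not any platform's actual system.

```python
# Toy illustration only: why near-duplicate matching uses perceptual hashes.
# The frames, hash function, and threshold are invented for this sketch;
# real systems (e.g. the GIFCT hash-sharing database) are far more sophisticated.

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical 4x4 grayscale frames: an original clip, a slightly
# re-encoded copy (uniform brightness shift), and an unrelated video.
original  = [[10, 40, 200, 220], [15, 35, 210, 215],
             [20, 30, 205, 225], [12, 38, 198, 230]]
reencoded = [[p + 5 for p in row] for row in original]
unrelated = [[120, 130, 110, 125], [115, 135, 105, 140],
             [125, 120, 130, 115], [118, 122, 128, 132]]

h_orig, h_copy, h_other = (average_hash(f) for f in (original, reencoded, unrelated))

# An exact file hash would treat the re-encoded copy as brand-new content;
# the perceptual hash still matches it while leaving the unrelated video alone.
print(hamming_distance(h_orig, h_copy))    # 0 bits differ  -> flagged as a known video
print(hamming_distance(h_orig, h_other))   # 6 of 16 differ -> not a match
```

Even this sketch hints at the arms race: a heavier edit such as cropping, mirroring, or overlaying text pushes the distance past any fixed threshold, which is part of why copies keep slipping through.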
The Role of Algorithms
Then there are the algorithmic incentive structures built into social media that actively promote controversial, emotionally provocative content because it drives engagement. Studies have repeatedly shown that outrage, fear, and disgust are the most powerful catalysts of retweets and shares.
So in a perverse way, a graphic mass shooting video is the perfect fodder to go viral online. It hits all the psychological triggers that we're evolutionarily primed to pay attention to, and that social media systems are financially rewarded for amplifying.
"The problem is the business model," Tristan Harris, a former Google design ethicist, told The Atlantic. "It's the fact that platforms make more money the more time they can get you to spend … It just so happens that content that tears apart social relationships performs really well."
Some experts have called for a complete overhaul of engagement-based algorithms, or at least circuit breakers that demote certain types of content in emergencies like a mass shooting. But platforms have been reluctant to sacrifice the metrics that growth and ad revenue depend on.
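As a rough illustration of what such a circuit breaker might look like, here is a minimal Python sketch of an engagement-weighted feed score with a crisis-mode demotion. Every weight, field name, and the crisis_mode switch is hypothetical; no platform's real ranking formula is public, and this is a sketch of the idea, not an implementation of any company's system.

```python
# Hypothetical sketch of a ranking "circuit breaker": engagement-weighted
# scoring with a sharp demotion for content flagged as graphic during a crisis.
# All weights and labels are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reshares: int
    flagged_graphic: bool  # e.g. matched against a known-video hash list

def rank_score(post: Post, crisis_mode: bool) -> float:
    """Engagement-based score, demoted sharply when the circuit breaker is active."""
    score = post.likes * 1.0 + post.reshares * 3.0  # reshares weighted more heavily
    if crisis_mode and post.flagged_graphic:
        score *= 0.01  # demote immediately instead of waiting for removal
    return score

feed = [
    Post("eyewitness news report", likes=500, reshares=120, flagged_graphic=False),
    Post("re-upload of the livestream", likes=20_000, reshares=10_000, flagged_graphic=True),
]

# Under crisis mode, the flagged re-upload drops below the news report
# despite having far more raw engagement.
for post in sorted(feed, key=lambda p: rank_score(p, crisis_mode=True), reverse=True):
    print(f"{rank_score(post, crisis_mode=True):>10.1f}  {post.text}")
```

The point is not the specific numbers but the design choice: demotion can act platform-wide within seconds, whereas removal depends on reports and reviewers catching each individual copy.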
Education and Digital Literacy
So what's the solution? There's clearly no silver bullet for preventing graphic violence from spreading online. Content moderation at scale is an enormous societal challenge we're only beginning to grapple with. But many experts believe part of the answer lies in education and digital literacy.
"We need to teach people from a young age to be critical consumers of media, to understand how information spreads online, and to be mindful of what they amplify," says Brendan Nyhan, a political scientist who studies misperceptions and conspiracy theories.
Schools, libraries, and nonprofit groups are doing vital work to equip kids and adults alike with better tools for navigating the modern information landscape. Knowing how to fact-check a source, recognize an emotional manipulation tactic, or avoid knee-jerk sharing can be a powerful bulwark against misinformation and harmful content.
On an individual level, we all have to take responsibility for what we click on and post about online. The next time a graphic video or divisive news story pops up in your feed, pause and ask yourself: what's motivating me to engage with this? Am I just indulging morbid curiosity or tribalistic rage? Could my reaction have unintended consequences?
None of us can control how content proliferates across an instantaneous, hyper-connected digital ecosystem. But we can control our own minds and choices. We can resolve to be more reflective and discerning with our attention. Because ultimately, that's what provocateurs and platforms seek to hijack: our precious time and mental energy.
Community and Resilience
In the face of unspeakable tragedies like the Buffalo massacre, it's easy to despair at the hatred being nurtured online. But we can't lose sight of how social media can also be a powerful tool for solidarity, comfort, and collective resilience in times of crisis.
In the days following the shooting, Buffalo community members used Twitter and Instagram to organize vigils, solicit donations for victims' families, and share messages of hope and healing. Hashtags like #BuffaloStrong and #StandUpToHate brought people together in a citywide showing of unity against racism and violence.
Platforms may be imperfect, but they can still be forces for good when we wield them consciously and constructively. They can help marginalized voices be heard, connect people across divides, and mobilize social change. It‘s up to all of us to realize that potential, even as we face their capacity for harm.
As I finish this post, I want to acknowledge the deep pain and grief the Buffalo community is experiencing. The devastating loss of life in this cowardly act of bigotry has left scars that will take years to heal. But I hope the way the city has banded together in love and solidarity can be a model for the rest of the nation.
To support the victims' families and ongoing anti-racism work in Buffalo, here are some trusted organizations and mental health resources:
- Buffalo 5/14 Survivors Fund
- Buffalo Community Fridge
- Black Love Resists in the Rust
- Mental Health Advocates of WNY
The road ahead will be long and difficult. But as we've seen before, the only way to disarm hate is to confront it head-on – with truth, courage, and compassion for each other. Even in our darkest hours, that is a cause for hope.