
Move Slow and Break Everything

Silicon Valley's Year in Hate

On Oct. 26, 2018, authorities arrested 56-year-old Cesar Sayoc in South Florida after he mailed 14 crudely constructed pipe bombs to outspoken critics of President Trump over the span of five days.


Broward County Sheriff's Office/AP Images

Before mailing the bombs, Sayoc trafficked in hate and conspiracy online.

His social media accounts were dedicated to extreme right-wing conspiracy theories attacking prominent liberals, such as philanthropist George Soros and Bill and Hillary Clinton. He also regularly circulated conspiratorial content about undocumented immigrants and Islamic terrorism, and he reportedly told a former co-worker, "Everybody that wasn't white and wasn't a white supremacist didn't belong in the world."

One subject of Sayoc's harassment, Rochelle Ritchie, a regular Fox News panelist, reported one of Sayoc's threats to Twitter's Trust and Safety team.

"So you like make threats," Sayoc wrote to Ritchie from his account @hardrock2016. "We Unconquered Seminole Tribe will answer your threats. We have nice silent Air boat ride for u here on our land Everglades Swamp. We will see you 4 sure. Hug your loved ones real close every time you leave you home."

Twitter responded on Oct. 11, 2018, informing Ritchie that Sayoc's threat did not violate the company's policies on abusive behavior. Eleven days later, pipe bombs began arriving at the homes of Sayoc's enemies.

It took five days and Sayoc's arrest on charges including "illegal mailing of explosives, threats against former presidents and others and assaulting current and former federal officers" for Twitter to acknowledge that it had made an error. Only then did the company delete Sayoc's account.

This episode is merely one recent example of Silicon Valley's inability, and unwillingness, to address the rampant abuse of social media platforms by extremists and its increasingly violent consequences.

The disconnect between the point of view of tech company leaders and the toxicity that users on their platforms have to endure was on plain display in July 2018, when Facebook CEO Mark Zuckerberg told Recode that those who deny the Holocaust weren't "intentionally getting it wrong." This stunning moment of myopia received swift criticism.

Deborah Lipstadt, the Dorot Professor of Holocaust Studies at Emory University and a world-renowned expert on denialism, called Zuckerberg's comments "breathtakingly irresponsible." The German Justice Ministry rebuked the CEO, reminding the company that Holocaust denial is punishable by law in Germany and that Facebook is presently liable for such content.

Zuckerberg's remarks on Holocaust denial, which he later clarified, stating, "I personally find Holocaust denial deeply offensive, and I absolutely didn't intend to defend the intent of people who deny that," came 77 days after the company consented to an independent civil rights audit to assess its impact on underrepresented communities and communities of color.

Showing its total misunderstanding of the gravity of the situation, Facebook announced its civil rights audit at the same time it announced a separate audit into whether the company censors conservative voices, to be led by Republican Sen. Jon Kyl of Arizona. That Facebook would draw an equivalence between civil rights and the possible censoring of conservatives, and then choose Kyl, who has a history of appearing alongside anti-Muslim extremists, was an astounding move.

Highly motivated hate groups are metastasizing on the internet and social media despite warnings from civil rights organizations, from victims of online harassment and bigotry, and from those caught up in increasingly deadly tragedies. Tech companies do not fully understand hate groups and how they affect users. And their actions remain insufficient.

How Hate Groups Really Use Social Media

Tech companies fail to take seriously the hate brewing on their platforms, waiting until hate-inspired violence claims its next victim before taking meaningful enforcement action.


Charleston County Sheriff's Office/AP Images

This was the case after Dylann Roof murdered nine black parishioners in 2015 and after the riot that left one dead and nearly 20 injured in Charlottesville, Virginia, at the "Unite the Right" rally in 2017.

The rally, energized by the racist "alt-right" and emboldened by the Trump administration, was the largest public demonstration of white supremacists in a generation. It was organized on Facebook, as was, for a period of time, its sequel in August 2018.

Despite several crackdowns, tech companies struggle to understand and address how hate groups use online space to organize, propagandize and grow.

Most of the white nationalist and neo-Nazi groups operating today are no longer traditionally structured organizations led by known figures.

Instead, hate groups are now mainly loosely organized local chapters formed around propaganda hubs, operating almost entirely online. These local-level chapters, such as the neo-Nazi Daily Stormer's "Book Clubs," the white nationalist The Right Stuff's "Pool Parties," or even the violent neo-Nazi Atomwaffen Division's regional cells, imbibe propaganda and trade in violent, hate-laden content and rhetoric meant to dehumanize their targets and desensitize online consumers.

The increase in white nationalist and neo-Nazi hate group chapters in 2018, as well as the uptick in their propaganda campaigns and violence, shows that these digitally savvy groups are flourishing in spite of Silicon Valley's promises to police them.

Facebook CEO Mark Zuckerberg, left, and Twitter CEO Jack Dorsey, right, have failed to quash the hate on their platforms. Andrew Harnik/AP Images (Zuckerberg); Jose Luis Magana/AP Images (Dorsey)

Another Year of Inconsistent Enforcement

As user growth accelerated on social media platforms, so, too, did the presence of hate groups and their leaders, finally able to transcend geographic boundaries and enjoy unprecedented audiences for their carefully tuned propaganda.

While official groups and pages for hate organizations and some of their leaders have been deplatformed, tech companies have allowed countless unofficial groups, led by the very same figures and populated by the same devotees, to prosper despite repeated warnings.

For example, when the Southern Poverty Law Center presented Facebook with examples of outwardly violent images and rhetoric targeting vulnerable communities, such as Muslims and immigrants, produced by both anonymous and known leaders in the hate movement, the company largely declined to take action.

The consequences of inaction were on full display in August, when Facebook declined to ban the Proud Boys, a hate group most notable for its history as an early waypoint for extremists who later join overtly white supremacist organizations. The company allowed the group to continue operating recruitment and new member vetting pages on the platform even after the Proud Boys led a violent protest in Portland last June that police ultimately declared a riot. When provided documentation of the Proud Boys' use of the platform, Facebook ruled that the group did not violate its community standards, which include provisions against "potential real-world harm that may be related to content on Facebook." Despite open glorification of violence, as well as a multitude of videos documenting beatings by Proud Boys, Facebook allowed the group to remain on the site.

Two months later, nine members of the Proud Boys were sought by authorities in New York on charges of rioting, assault and attempted assault after a brawl outside the Metropolitan Republican Club, where the group's leader at the time, Gavin McInnes, was speaking.

Twitter opted to suspend McInnes' account a week after the New York incident under a policy "prohibiting violent extremist groups," the precise policy that Facebook did not find sufficient evidence to enforce after the Portland riot.

In August, Jack Dorsey, CEO and co-founder of Twitter, announced that the company would not ban the account of far-right conspiracy theorist Alex Jones because Jones hadn't violated the platform's rules, despite action taken against Jones by other major companies including Apple, Spotify, Facebook and YouTube. While defending his decision, Dorsey acknowledged Twitter's shortcomings in communicating its policies and declared, "We're going to hold Jones to the same standard we hold to every account, not taking one-off actions to make us feel good in the short term, and adding fuel to new conspiracy theories."

The next day, Dorsey called into right-wing media personality Sean Hannity's radio show to assure listeners that Twitter wasn't censoring conservative voices.

One month later, Twitter banned Jones for violating its policies on abusive behavior after Jones harassed a CNN journalist outside of a congressional hearing at which Dorsey was testifying.

This lack of consistency across platforms is glaring. Social media companies are typically the first to point out that each platform is unique and involves different sets of considerations when assessing policy violations. But this keeps the door open for hate groups to slide back onto platforms or avoid enforcement altogether.

Jones and Infowars still have a presence on Facebook, including a personal page, a public page for streaming content and a closed group with more than 100,000 members. While Facebook banned several of his most prominent pages, Jones was allowed to continue peddling his conspiracies on the site. And while Twitter banned Jones, as well as the Proud Boys and McInnes, former Ku Klux Klan leader David Duke and white nationalist Richard Spencer inexplicably remain on the platform. Given that both Duke and Spencer have received international travel bans from European countries with legal protections around inciting hatred, it's clear that enforcement is inconsistent.

Similar scenes have repeatedly played out on a number of platforms this year.

Discord, a chat platform designed for video game communities, has become a persistent and indispensable tool for white supremacist organizing. In early October, a journalist from Slate presented the company with a list of servers designed for denigrating minorities, indoctrinating individuals into white supremacy and doxing perceived enemies. Discord declined to take meaningful action or comment on its decision. Its terms of service state, "[Discord] has no obligation to monitor these communication channels but it may do so in connection with providing the Service."

An even stranger scenario played out when a support specialist at Steam, a video game distribution platform, rescinded a disciplinary action taken against Andrew Auernheimer, a well-known and violence-minded white supremacist affiliated with the neo-Nazi Daily Stormer. Auernheimer, who was openly displaying two lightning bolts in his username, a reference to the Schutzstaffel of Nazi Germany, successfully lobbied for his sanction to be reversed over a flimsy claim that the symbols were meant as support for Vice President Mike Pence.

Auernheimer's victory immediately became propaganda for the Daily Stormer.

Meanwhile, payment platforms such as Stripe continue to process monthly subscription payments for organizations like The Right Stuff, one of the white supremacist movement's most popular and effective propaganda hubs, which specializes in radio programming. PayPal has taken a more aggressive line, but continues to struggle to keep up with the agility of cash-starved extremists who rapidly establish and publicize new accounts to continue receiving financial support.

YouTube had problems this year after a CNN investigation found that the company was running ads for more than 300 major brands, including Under Armour, Hilton, Adidas and Hershey's, on channels peddling extremism.

It's a Public Health Problem

In an interview with the Intelligence Report, political scientist P.W. Singer compared the colonization of social media platforms by conspiracy and extremist content to a public health issue that requires a "whole-of-society effort" to address.

But tech companies must do a lot more. Singer described the need for "creating firebreaks to misinformation and spreads of attacks that target their customers" as well as "'deplatforming' proven 'superspreaders' of harassment."

"Superspreaders are another parallel to public health, that in the spread of both disease and hate or disinformation, a small number of people have a massive impact," Singer said. "Sadly, many of these toxic superspreaders are not just still active but rewarded with everything from more followers to media quotes to actual invitations to the White House."

The Network Contagion Research Institute (NCRI) also approaches online hate as a public health crisis. Organized last May, the group conducted a large-scale analysis of 4chan and Gab data and examined how extremist online communities succeeded in spreading hateful ideas and conspiracies to larger, mainstream platforms like Twitter and Reddit.

Echoing Singer, NCRI director Joel Finkelstein told The Washington Post, "You can't fight the disease if you don't know what it's made of and how it spreads."

Their research shows spikes in the use of antisemitic and racist terms around President Donald Trump's inauguration in 2017 and then later that year around the violence at the "Unite the Right" rally in Charlottesville, Virginia. The NCRI report describes a "worrying trend of real-world action mirroring online rhetoric."

Gab, which was founded as an alternative to Twitter and has become an accepting home base for much of the alt-right, responded to the study in a now-deleted tweet saying, "hate is a normal part of the human experience. It is a strong dislike of another. Humans are tribal. This is natural. Get over it."

On Saturday, Oct. 27, a 46-year-old white man, Robert Bowers, walked into a Pittsburgh synagogue armed with semiautomatic weapons. He has been charged with the murders of 11 congregants. He was an active Gab user, and his posts showed a deep engagement with white nationalist ideas and antisemitic conspiracy theories, including those surrounding Soros that led Sayoc to mail the philanthropist a pipe bomb in late October.

Bowers is far from the first killer to have left a trail of online radicalization on platforms large and small. The reactive pattern of tech companies crafting policy after real-world violence, only to let enforcement lag as time passes, remains woefully inadequate.

Tech companies need to understand the proliferation of hate online and take measures that address it, both as a public health issue and as a societal emergency.