Year in technology 2019
Even as research documented a link between online speech and offline violence, internet companies struggled in 2019 to prioritize public safety over the freedom of their users to post extremist content.
At the same time, they dove ever more deeply into questions about whether politicians should have greater leeway than others to promote abusive, even racist, language and the same kind of demonizing falsehoods and memes often disseminated by far-right extremists.
Facebook, YouTube, Twitter and Google all announced new policies involving political content during a year that saw President Trump escalate his attacks on the industry, accusing social media companies of censoring conservative voices and threatening to regulate them in retaliation.
Trump's threats came as he began to ramp up a re-election campaign that would undoubtedly feature a heavy dose of social media messages vilifying his opponents and rallying a largely white base of support. And they came after several years in which the industry had attempted to curtail the use of its services by white nationalists and other extremists.
The strain that Trump's own rhetoric put on the tech companies did not stop executives like Facebook's Mark Zuckerberg and Twitter's Jack Dorsey from meeting privately with him.
In June 2019, in response to mounting criticism, Twitter announced that politicians' tweets containing threats or abusive language could be slapped with warning labels requiring users to click through before seeing the content. Under the policy, however, the offending tweets are not removed from the site, as an ordinary user's would be. This shift, based on the idea that political speech is always a matter of public interest, effectively protects the speech of society's most powerful figures even when it otherwise violates Twitter's rules against abusive language. The policy applies to government officials, politicians and similar public figures who have more than 100,000 followers. Twitter did say, however, that it would not use its algorithm to promote such tweets.
It wasn't long before a Trump tweet tested the new policy. In July, Twitter said that the president's tweet telling U.S. Reps. Alexandria Ocasio-Cortez, Ilhan Omar, Ayanna Pressley, and Rashida Tlaib, all women of color, to "go back" to their countries did not violate its rules against racism. The tweet did not, in fact, receive a warning label.
It took a November tweet by Omar's Republican challenger, Danielle Stella, suggesting that Omar "should be hanged," for Twitter to take meaningful enforcement action against a political candidate. Twitter said Stella's account was permanently suspended.
Like Twitter, YouTube struggled to draw the line between public interest and public harm. The video giant announced in September that politicians would be exempt from some of its content moderation rules.
Facebook took a similar tack. Nick Clegg, its vice president of global affairs and communications, announced that the company would exempt politicians from its third-party fact-checking program, which it uses to reduce the spread of false news and other forms of viral misinformation. In short, Facebook decided to allow politicians to lie on its platform, both in advertisements and in other forms of political speech.
On the advertising front, Twitter decided to ban political campaign ads entirely, while Google opted to severely limit campaigns' ability to target narrow groups of people, a practice known as "microtargeting."
Research Links Online Speech to Offline Violence
The new policies came amid mounting evidence linking online speech to offline violence.
A study released by New York University researchers in June found a correlation between certain racist tweets and hate crimes in 100 U.S. cities. The research examined 532 million tweets across U.S. cities of varying geographies and populations and found that areas with more targeted, discriminatory speech had higher numbers of hate crimes.
Change the Terms, a coalition of more than 50 civil rights organizations, of which the Southern Poverty Law Center is a founding member, is advocating for tech companies to adopt model policies that effectively combat hate and extremism. In September, the coalition convened a town hall in Atlanta that featured top leaders from Facebook, including chief operating officer Sheryl Sandberg.
"People in our communities are dying at the hands of white supremacy; the stakes are that high," Jessica Gonzalez, vice president of strategy at Free Press, a member of Change the Terms, told Sandberg and other attendees. "The safety of users must be a priority on the platform."
Social Media Platforms Function as Vectors for Hate
Social media platforms proved to be a vector for the spread of white supremacist ideologies both during and after acts of domestic terrorism in 2019.
On March 15, an extremist attacked two mosques in Christchurch, New Zealand, killing 51 people and injuring another 49. The perpetrator broadcast the attack on Facebook Live, and both the video of the attack and a 74-page manifesto went viral in the immediate aftermath. Facebook reported removing 1.5 million copies of the video in the first 24 hours, 1.2 million of which were blocked at upload to prevent viewing.
Six weeks later, a 19-year-old man in Poway, California, attacked the Chabad of Poway synagogue on the last day of Passover, killing one person and injuring three. In the moments before the attack, the perpetrator posted a manifesto to the imageboard 8chan, a site notorious for its community of far-right extremists. The document included a link to a planned Facebook livestream of the attack, although the stream appears to have failed.
After the attacks, Facebook announced tighter restrictions on its Facebook Live platform, including temporary and permanent bans from the service for users who violate certain rules. It's unclear, though, whether these new policies would have prevented the viral spread of videos showing the attacks.