Updated: Feb 10
The special committee took it easy on the social media giants. That was a mistake.
Cartoon by Bruce Plante/Cagle Cartoons.
I WAS AWAY when the January 6 Committee finished its work. I am catching up with its report now and wanted to share some thoughts. You can read the full report and all published supporting materials here.
The final report is expansive, 800 pages long, and loaded with remarkably depressing facts, testimony, and findings. It calls for all sorts of things, including the prosecution of Donald Trump for his unique and specific role in inciting the mob.
There may well be real-world consequences for the former president (I’ll believe it when I see it), but there will not be much in the way of consequences for the platforms that allowed themselves to be accessories to the madness of that day.
From The Washington Post’s Cat Zakrzewski, Cristiano Lima and Drew Harwell:
The Jan. 6 committee spent months gathering stunning new details on how social media companies failed to address the online extremism and calls for violence that preceded the Capitol riot. […] But in the end, committee leaders declined to delve into those topics in detail in their final report, reluctant to dig into the roots of domestic extremism taking hold in the Republican Party beyond former president Donald Trump and concerned about the risks of a public battle with powerful tech companies, according to three people familiar with the matter who spoke on the condition of anonymity to discuss the panel’s sensitive deliberations.
What an odd and surprisingly cowardly choice. The draft section dealing with social media is available online, and I highly recommend that you take an evening and read it.
To leave this out of the final report was a mistake. The draft includes specific sections on the social media giants — Twitter, Facebook, YouTube, Reddit, and TikTok — and also delves into right-wing fringe platforms, as well as platforms less familiar to the political crowd, like Discord and Twitch.
Reading the report, it’s clear that committee investigators were very much aware of who was using social media to organize the riot (and to generally promote political violence), what their goals were, and the role of Trump in all of it.
What’s also clear is that mainstream platforms like Twitter, Facebook, and YouTube have pretty sophisticated processes and policies in place to at least attempt to manage such things. This, to me, is the most frustrating aspect of the whole thing.
Here’s an example from the Twitter section of the report:
Twitter’s leadership and the Safety Policy team never aligned on how to handle the risk that post-election violence would be incited on the service, and the Safety Policy team complained that leadership was “confused” about the policy’s “origin, urgency, and ultimate purpose.”
It gets worse:
The Safety Policy team was not the only source of warnings to Twitter’s leadership. The weekend before the attack, a representative from the Georgia-based civil rights advocacy nonprofit Fair Fight reported several violent tweets to Twitter targeting the Georgia special election, which ultimately decided control of the US Senate. Among these were tweets from Overstock.com CEO Patrick Byrne, who threatened to “lynch” an election official and claimed to have paid an operative to break into a voting facility to retrieve “samples.” These tweets led to individual threats to the physical safety of a specific, named individual on pro-Trump message boards. Another was from the prominent white nationalist Nick Fuentes, who said during a livestream that Georgians had “no other recourse” than to kill state legislators. Yet another came from Project Veritas, which named a specific advocacy center, resulting in several of the center’s employees receiving death threats and being doxed by far-right activists. Amazingly, Twitter’s initial response to Fair Fight said that many of these tweets did not violate its policy against violent threats or were only eligible to be labeled, not removed. Those that were eventually removed remained on the platform until after the Georgia election, on the morning of January 6th.
That’s the same Nick Fuentes who was recently reinstated on Twitter by Elon Musk. This man is a neo-Nazi.
I’ll spare you more quotes from the other sections, but it’s much of the same. Content moderation policies abounded, and the interpretations of them by internal company actors were all over the map.
The only conclusion I can come to is that content moderation policies at these companies are largely useless in ordinary times, and utterly useless in the face of extraordinary events (of course, things have gotten a lot worse at Twitter already). If anything, they are simply tools these companies use to obfuscate their roles in modern communications. In the end, who cares about broad content moderation policies if those policies don’t apply to the very things they aspire to protect against? Why would a president even be considered subject to a platform-spanning set of policies? It’s the President of the United States of America, a truly unique case in every sense.
They can crow about free speech and the First Amendment, but these companies have nothing to do with either one of those things. Their platforms were used to organize a seminal, tragic event in U.S. history — and they are still being used, right now, to further promote the ideology behind it. It’s great that Facebook has a robust team meant to stem the tide of awfulness on the platform, but what are those teams and policies really doing to address that awfulness? Not much, it seems. And they really (really) failed when it counted most.
“The content doesn’t go against our guidelines” has become the “thoughts and prayers” of the social media age. In this case, the select committee charged with giving us a full accounting of the events of January 6 and the mountains of context behind them has short-changed the public. It’s a unique failure for a committee that truly did some great and important work, and it’s unacceptable.
Before 2018, Twitter had never spent more than $1 million on lobbying in Washington, and it remains what I would consider “small” in terms of lobbying presence. That is decidedly not the case with Meta (Facebook) and Alphabet (YouTube). Meta has spent roughly $20 million per year in DC over the last three years, and Alphabet consistently spends over $10 million. Notably, Alphabet’s lobbying expenditures have actually declined over the last four years.
That’s a lot of money. And given these companies’ reach and influence in society, those dollar amounts are just one piece of the larger web of influence they exert over our politics. In this case, the companies to which we have willingly outsourced the public square have come out looking ineffective, and probably even culpable in some ways…but that part has been left out of the record.