Yesterday, in a (deliberately, I assume) well-timed release, the Tech Transparency Project released a report entitled Facebook's Militia Mess, detailing how there are tons of "militia groups" organizing on the platform (first spotted via a report on Buzzfeed). You may recall that, just days after the insurrection at the Capitol, Facebook's COO Sheryl Sandberg made the extremely disingenuous claim that only Facebook had the smarts to stop these groups, and that most of the organizing of the Capitol insurrection must have happened elsewhere. Multiple reports debunked that claim, and this new one takes it even further, showing that (1) these groups are still organizing on Facebook, and (2) Facebook's recommendation algorithm is still pushing people to them:
TTP identified 201 Facebook militia pages and 13 groups that were active on the platform as of March 18. These included “DFW Beacon Unit” in Dallas-Fort Worth, Texas, which describes itself as a “legitimate militia” and posted March 21 about a training session; “Central Kentucky Freedom Fighters,” whose Facebook page posts near-daily content about government infringing on people’s rights; and the "New River Militia" in North Carolina, which posted about the need to “wake up the other lions” two days after the Capitol riot.
Strikingly, about 70% (140) of the Facebook pages identified by TTP had “militia” in their name. That’s a hard-to-miss affiliation, especially for a company that says its artificial intelligence systems are successfully detecting and removing policy violations like hate speech and terrorist content.
In addition, the TTP investigation found 31 militia-related profiles, which display their militia sympathies through their names, logos, patches, posts, or recruiting efforts. In more than half the cases (20), the profiles had the word “militia” in their name.
And, this stuff certainly doesn't look great:
Facebook is not just missing militia content. It’s also, in some cases, creating it.
About 17 percent of the militia pages identified by TTP (34) were actually auto-generated by Facebook, most of them with the word “militia” in their names. This has been a recurring issue with Facebook. A TTP investigation in May 2020 found that Facebook had auto-generated business pages for white supremacist groups.
Auto-generated pages are not managed by an administrator, but they can still play a role in amplifying extremist views. For example, if a Facebook user “likes” one of these pages, the page gets added to the “about” section of the user’s profile, giving it more visibility. This can also serve as a signal to potential recruiters about pro-militia sympathies.
Meanwhile, Facebook’s recommendation algorithm is pushing users who “like” militia pages toward other militia content.
When TTP “liked” the page for “Wo Co Militia,” Facebook recommended a page called “Arkansas Intelligent citizen,” which features a large Three Percenter logo as the page header. (The “history” section in the page transparency shows that it was previously named “3%ERS – Arkansas.”)
Of course, this appears to stand in stark contrast to what Facebook itself is claiming. In Mark Zuckerberg's testimony today before Congress on dealing with disinformation, he again suggests that Facebook has an "industry-leading" approach to dealing with this kind of content:
We remove Groups that represent QAnon, even if they contain no violent content. And we do not allow militarized social movements—such as militias or groups that support and organize violent acts amid protests—to have a presence on our platform. In addition, last year we temporarily stopped recommending US civic or political Groups, and earlier this year we announced that policy would be kept in place and expanded globally. We’ve instituted a recommendation waiting period for new Groups so that our systems can monitor the quality of the content in the Group before determining whether the Group should be recommended to people. And we limit the number of Group invites a person can send in a single day, which can help reduce the spread of harmful content from violating Groups.
We also take action to prevent people who repeatedly violate our Community Standards from creating new Groups. Our recidivism policy stops the administrators of a previously removed Group from creating another Group similar to the one removed, and an administrator or moderator who has had Groups taken down for policy violations cannot create any new Groups for a period of time. Posts from members who have violated any Community Standards in a Group must be approved by an administrator or moderator for 30 days following the violation. If administrators or moderators repeatedly approve posts that violate our Community Standards, we’ll remove the Group.
Our enforcement effort in Groups demonstrates our commitment to keeping content that violates these policies off the platform. In September, we shared that over the previous year we removed about 1.5 million pieces of content in Groups for violating our policies on organized hate, 91 percent of which we found proactively. We also removed about 12 million pieces of content in Groups for violating our policies on hate speech, 87 percent of which we found proactively. When it comes to Groups themselves, we will remove an entire Group if it repeatedly breaks our rules or if it was set up with the intent to violate our standards. We took down more than one million Groups for violating our policies in that same time period.
So, on the one hand, you have a report finding these kinds of groups still on the site, despite apparently being banned. And, on the other hand, you have Facebook talking about all of the proactive measures it's taken to deal with these groups. Both of them are telling the truth, but this highlights the impossibility of doing content moderation well at scale.
Start with the scale of the issue. Zuckerberg notes that Facebook has removed more than one million groups. The TTP found 13 militia groups and 201 militia pages. At Facebook's scale, some things that should be removed are always going to slip through. Some might argue that if the TTP could find these pages, then clearly Facebook could as well. But that raises two separate issues. First, what exactly are they looking for? There are so many things that could violate policies that I'm sure Facebook trust & safety folks are constantly doing searches like these -- but just because they don't run the exact same searches as the TTP doesn't mean they're not looking for this stuff. Indeed, one could argue that finding just 13 such groups is pretty good.
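To see how unforgiving that arithmetic is, here's a quick back-of-the-envelope sketch. The one-million-groups-removed figure comes from Zuckerberg's testimony above; the detection rates are purely hypothetical, chosen only to illustrate the scale problem:

```python
# Back-of-the-envelope: even a very accurate detection system, applied at
# Facebook scale, leaves a lot of misses in absolute terms.
# The ~1,000,000 removed-groups figure is from Zuckerberg's testimony;
# the recall numbers below are hypothetical.

violating_groups_caught = 1_000_000  # groups Facebook says it removed in a year

for recall in (0.999, 0.99, 0.95):
    # If the system catches `recall` of all violating groups, estimate
    # how many violating groups it never catches at all.
    total_violating = violating_groups_caught / recall
    missed = total_violating - violating_groups_caught
    print(f"recall {recall:.1%}: roughly {missed:,.0f} violating groups slip through")
```

Even at detection rates that would be considered extraordinarily good, the absolute number of violating groups slipping through would dwarf the 201 pages and 13 groups the TTP turned up.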
Second, what exactly is the policy violation? Facebook says that it bans militia groups "that support and organize violent acts amid protests." But that doesn't mean every group that refers to itself as a "militia" violates those policies; you can easily see how many might not. On top of that, assuming these groups recognize how Facebook has been cracking down, it's quite likely that many will simply try to "hide" behind other language to make themselves more difficult for Facebook to find (indeed, the TTP report points to one example of a "militia" group saying it needed to change its name -- and in that example, it was local law enforcement that suggested the name change).
So, there's always going to be some element of cat-and-mouse on these kinds of things, and some level of subjectivity in determining whether a group is actually violating Facebook's policies or not. It's easy to play a "gotcha" game and find groups like this, but that's because at scale it's always going to be impossible to be correct 100% of the time. Indeed, it's also quite likely that these efforts over-blocked in some cases, taking down groups that they should not have. Any effort at content moderation, especially at scale, is going to run into both Type I (false positive) and Type II (false negative) mistakes. Finding and highlighting just a few of those mistakes doesn't mean that the company is failing overall -- though it may provide some suggestions on how and where the company can improve.
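And those two error types trade off against each other, which is why "just turn the dial up" doesn't work. Here's a minimal sketch with entirely made-up numbers: any automated system has to pick some removal threshold, and moving that threshold to catch more violating groups necessarily sweeps in more groups that merely look similar:

```python
# Minimal sketch of the Type I / Type II tradeoff in automated moderation.
# All numbers are hypothetical; the point is only that the two error types
# move in opposite directions as you adjust the removal threshold.

import random

random.seed(42)

# Pretend "risk scores" from a classifier: violating groups tend to score
# higher, but the distributions overlap (groups that merely *look* militant).
violating = [random.gauss(0.75, 0.12) for _ in range(1_000)]
benign = [random.gauss(0.45, 0.12) for _ in range(100_000)]

for threshold in (0.5, 0.6, 0.7):
    false_negatives = sum(score < threshold for score in violating)  # missed bad groups
    false_positives = sum(score >= threshold for score in benign)    # wrongly removed groups
    print(f"threshold {threshold}: {false_negatives} violating groups missed, "
          f"{false_positives} legitimate groups removed")
```

Wherever you set the threshold, you either miss genuinely violating groups or take down groups that did nothing wrong -- and usually some of both.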