from the same-shit,-new-day dept

Another day in which we get to explain how content moderation is impossible to do well at scale. On Wednesday, Twitter (and Facebook) chose to lock the Trump campaign’s account after it aired a dangerous and misleading clip from Fox News’ “Fox & Friends” in which the President falsely claimed that children are “almost immune” from COVID-19.

People can debate whether it was appropriate or not for Twitter (and Facebook) to make those content moderation decisions, but it seems perfectly defensible. Claiming that kids are “almost immune” is insane and dangerous. However, where things get sketchy on the content moderation front is that Twitter also then ended up freezing the accounts of journalists and activists who fact checked that “Fox & Friends” nonsense:

Or in the case of Bobby Lewis from Media Matters, Twitter suspended his account for simply mocking part of the Fox & Friends clip, noting that when a host asked the President to “say something to heal the racial divisions in America” Trump couldn’t do it and could only brag about himself:

Now, tons of people are reasonably pointing out that this is ridiculous, and arguing that Twitter is "bad" at content moderation. But, again, this all comes just a few weeks (has it been a few weeks? time has no meaning) after Facebook, Twitter, and YouTube all received tremendous criticism for not being fast enough in pulling down another nonsense video: one that Breitbart livestreamed of "doctors" spewing utter nonsense about COVID-19 in front of the Supreme Court. Indeed, at last week's Congressional antitrust hearing, Rep. David Cicilline lit into Facebook for leaving that video up for five hours, allowing it to rack up 20 million views (meanwhile, multiple Republican representatives yelled at Zuckerberg for taking the video down).

So, if you have some politicians screaming that any clip of COVID-19 disinformation must be taken down, it's no surprise that social media platforms are going to rush to take that content down, and the easiest way to do that is to take down every instance of the clip, even the ones posted by people debunking, criticizing, or mocking it. Would it be nice if content moderation systems could figure out which is which? Yes, absolutely it would. But doing so would mean taking extra time to understand context (which isn't always so easy to understand), and in the process leaving the videos that some say are dangerous on their own up for longer.

In fact, if Twitter decided to keep up the videos posted by people fact checking or criticizing the original clip, you create a new dilemma: those who want the dangerous nonsense to spread can simply retweet the critical videos themselves, adding their own commentary in support of the original claims. And then what should Twitter do?

Part of the issue here is that these decisions always involve difficult trade-offs, and even if you think it's an easy call, the reality is that it's going to be more complex than you think.

Filed Under: como, content moderation, content moderation at scale, covid, donald trump, fact checking, impossible, journalism, reporting
Companies: twitter

Categories: Technology