Big Tech's censor machines keep malfunctioning
Matt Taibbi: ... Since the beginning of the “content moderation” movement, a major problem has become apparent. Human beings simply create too much content on Twitter, Facebook, YouTube, and Instagram for other human beings to review. Machines have proven able to identify clearly inappropriate content like child pornography (though even there the algorithms occasionally stumbled, as in the case of Facebook’s removal of the famous “Napalm Girl” photo). But asking computer programs to sort out the subtleties of different types of speech — differences between commentary and advocacy, criticism and incitement, reporting and participation — has proven a disaster. A theme running through nearly all of the “Meet the Censored” articles is this problem of algorithmic censorship systematically throwing out babies with bathwater. Whether it’s YouTube cracking down on videographer Ford Fischer for covering events involving Holocaust deniers or white supremacists, the same platform zapping f...