Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too

A Twitter employee who works on machine learning believes that a proactive, algorithmic solution to white supremacy would also catch Republican politicians.

At a Twitter all-hands meeting on March 22, an employee asked a blunt question: Twitter has largely eradicated Islamic State propaganda from its platform. Why can't it do the same for white supremacist content?

An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues went up to the mic to add some context. (As Motherboard has previously reported, algorithms are the next great hope for platforms trying to moderate the posts of their hundreds of millions, or billions, of users.) Every content filter involves a trade-off, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts, such as Arabic-language broadcasters. Society, in general, accepts that the benefit of banning ISIS is worth inconveniencing some others, he said.

In separate discussions verified by Motherboard, that employee said Twitter hasn't taken the same aggressive approach to white supremacist content because the collateral accounts affected can, in some instances, belong to Republican politicians. The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn't be accepted by society as a trade-off for removing all of the white supremacist propaganda, he argued.
There is no indication that this position is official Twitter policy, and the company told Motherboard that this "is not [an] accurate characterization of our policies or enforcement—on any level." But the Twitter employee's comments highlight a sometimes overlooked debate around content moderation on tech platforms: are moderation issues purely technical and algorithmic, or do societal norms play a greater role than some acknowledge?

Though Twitter has rules against "abuse and hateful conduct," civil rights experts, government organizations, and Twitter users say the platform hasn't done enough to curb white supremacy and neo-Nazis, and its competitor Facebook recently explicitly banned white nationalism. On Wednesday, during a parliamentary committee hearing on social media content moderation, UK MP Yvette Cooper asked Twitter why it hasn't yet banned former KKK leader David Duke, and "Jack, ban the Nazis" has become a common reply to tweets from Twitter CEO Jack Dorsey.

During a recent TED interview that let the public tweet in questions, the feed was overtaken by people asking Dorsey why the platform hadn't banned Nazis. Dorsey said "we have policies around violent extremist groups," but did not give a straightforward answer to the question. He did not respond to two requests for comment sent via Twitter DM.

via motherboard: Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too
