In September, Twitter announced that it would label or remove posts claiming an election victory before the results were certified.
Roth said research debunking the "backfire effect," the idea that corrections can reinforce people's belief in misinformation, has led Twitter to rethink how its labels could be made more prominent. The risk is that the label "becomes a badge of honor" that users actively seek out.
"Most things move so quickly that if you wait 20 or 30 minutes, most of the spread has already happened for someone with a wide audience," said Kate Starbird, an associate professor at the University of Washington who has examined Twitter's labeling responses.
Roth said Twitter limits the reach of all tweets labeled as misinformation, in part by not recommending them in places like search results. The company declined to share data on the effectiveness of these steps.
In August, researchers at the Election Integrity Partnership found that Twitter's decision to disable retweets on a Trump tweet that violated its rules had a clear effect on its spread, but that the intervention was "too little, too late."
Facebook, which exempts politicians from its fact-checking program and faced backlash for not acting on misleading Trump posts, has begun adding labels with voting information to all election-related posts. Researchers have criticized this strategy for failing to draw a quick and obvious distinction between true and false claims.