Twitter Is Testing How Its Misinformation Labels Can Be More Obvious and More Direct
Twitter is rethinking how the labels it applies to misinformation look and work, its head of site integrity told Reuters in an interview, as the social media company strives to make these interventions more apparent and to shorten its response times.
Twitter’s Yoel Roth said the company is exploring changes to the small blue notices it attaches to certain false or misleading tweets, to make those signals more obvious and more direct in giving users information.
But he did not say whether any new versions would be ready before the U.S. election, now four weeks away, a period that experts say could be rife with false and misleading online content.
Roth said the new efforts on Twitter include testing a more visible reddish-magenta color and working out whether to flag users who consistently post false information.
Twitter started labeling manipulated or manufactured media in early 2020. It later expanded its labels to cover coronavirus misinformation and then misleading tweets about elections and civic processes.
Twitter says it has now labeled thousands of posts, although most attention has been paid to the labels applied to tweets by U.S. President Donald Trump.
In September, Twitter announced that it would label or remove posts claiming victory in an election before the results were certified.
Roth said research undermining the idea that corrections can reinforce people’s belief in misinformation, a phenomenon known as the ‘backfire effect,’ has led Twitter to rethink how its labels could be made more prominent.
The risk, he said, is that the label “becomes a badge of honor” that attention-seeking users actively pursue.
“Most things get going so quickly that if you wait 20 or 30 minutes, most of the spread has happened” for someone with a wide audience, said Kate Starbird, an associate professor at the University of Washington who has examined Twitter’s labeling responses.
Roth said Twitter reduces the reach of all tweets labeled as misinformation by limiting their visibility and by not recommending them in places like search results. The company declined to share data on the effectiveness of these steps.
In August, researchers at the Election Integrity Partnership said that Twitter’s disabling of retweets on a Trump tweet that violated its rules had a clear effect on its spread, but came “too little, too late.”
Facebook, which exempts politicians from its fact-checking program and has faced backlash for not acting on misleading Trump posts, has begun adding labels with voting information to all election-related posts.
That approach has been criticized by researchers for failing to draw a quick and obvious distinction between true and false content.