One month before the U.S. election, Twitter is considering ways to handle misinformation on its site more effectively and to better alert users.
The head of site integrity told Reuters this week that Twitter is exploring changes to the small blue notices it attaches to certain false or misleading tweets, aiming to make the labels more visible and clearer about what is being flagged and why. Twitter is also mulling the idea of flagging users who consistently post false information, so readers are warned before they even start reading a post.
It is not clear, however, whether any changes will be ready before the U.S. election, a period that experts warn could be flooded with false and misleading content.
Twitter has repeatedly bumped heads with President Trump over his unfounded claims of rampant mail-in voter fraud, and in September Twitter announced it would label or remove posts claiming election victory before results were certified.
But conservatives have complained that social media platforms stifle their right to free speech by unfairly targeting them, flagging their posts for misinformation more often than posts made by Democrats.
Facebook, which exempts politicians from its fact-checking program, has also started adding labels with voting information to posts on its platform. But critics say that is not enough, since the labels do not distinguish between true and false posts.