Community Notes is a powerful feature of the platform that relies on consensus. When Elon Musk declared in November that his mission was to make Twitter the “most accurate source of news about the world”, I may have let out a loud groan.
It’s true that Musk hasn’t always been devoted to the truth himself. He infamously tweeted that he had “funding secured” to take Tesla private; the deal never happened. In 2020, he tweeted that children were “essentially immune” to Covid-19.
Yet Musk is excited about a feature on the platform called Community Notes, and it is changing the internet fact-checking game.
Community Notes allows users to add context beneath other people’s tweets if they believe those tweets contain inaccurate or misleading information. Twitter originally piloted the feature in the US in January 2021 under the name “Birdwatch”; it kept that name until November, when Musk renamed it, claiming “Birdwatch” gave him the “creeps”.
You can think of the feature as a crowdsourced misinformation-fighting system: a contributor can add a note when they believe a tweet is false or needs additional context, and other contributors are then asked to vote on whether the note is “helpful” or not. If enough people agree that it is, the note is displayed underneath the original tweet; if they don’t, other Twitter users never see it.
In several ways, Community Notes differs from fact-checking in the traditional sense – which I think can be valuable, but which has too often been used as a weapon.
Because the process is conducted by consensus rather than by a single individual, the possibility of bias or error is reduced. A note is not labelled a “fact-check” but rather “readers added contextual information” – an important difference. This approach treats facts as complex, contested, and changing, instead of assuming they are always uncontested, straightforward, and established. Community Notes is also completely anonymous, which means there is no incentive to use it to score points or virtue-signal, as happens on other fact-checking websites.
It’s not perfect, though. Twitter says a note receives “helpful” status only when enough people “from different perspectives” rate it as helpful. The company says it does not use information such as gender or political affiliation to assess this, but instead looks at how people have rated previous notes. That sounds reasonable, but it is difficult to ensure that contributors are genuinely diverse. How different are the different viewpoints, and where does the median sit?
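To make the idea concrete: Twitter’s real scoring system (which it has open-sourced) is far more sophisticated, but the “agreement across different perspectives” requirement can be sketched as a toy rule. Everything below – the function name, the perspective labels, the thresholds – is invented for illustration only, not Twitter’s actual algorithm:

```python
# Toy sketch only: NOT Twitter's real Community Notes algorithm.
# All names, labels, and thresholds here are invented for illustration.

def note_status(ratings, min_ratings=5, min_ratio=0.7):
    """Decide a note's status from (rater_perspective, is_helpful) pairs.

    A note is marked "helpful" only if a large enough share of raters
    agree overall AND raters from more than one perspective found it
    helpful -- a crude stand-in for requiring agreement "from
    different perspectives" rather than a simple majority vote.
    """
    if len(ratings) < min_ratings:
        return "needs more ratings"
    helpful = [perspective for perspective, is_helpful in ratings if is_helpful]
    ratio = len(helpful) / len(ratings)
    cross_perspective = len(set(helpful)) >= 2  # agreement spans groups?
    if ratio >= min_ratio and cross_perspective:
        return "helpful"       # note would be shown under the tweet
    return "not helpful"       # note stays hidden from other users

ratings = [("left", True), ("right", True), ("left", True),
           ("centre", True), ("right", False), ("left", True)]
print(note_status(ratings))  # -> helpful (5/6 agree, across 3 perspectives)
```

Note that under this rule a note rated helpful only by one group is rejected even with unanimous support – which is exactly the property that raises the column’s question: how the groups are drawn decides everything.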
Nor is it clear that all tweets are treated the same. The majority of tweets I’ve seen get “Community Noted” are from left-leaning people, such as the MSNBC host Mehdi Hasan, who was Community Noted last week over a claim he made about rates of intraracial violence. Hasan responded to what I thought was a fair note – his claim did need additional context – by saying that the feature has “become another weapon of right on Musk’s twitter”.
It could become a self-fulfilling prophecy: if left-leaning people don’t participate as much, Community Notes may skew towards the right. Molly White, a Wikipedia editor who describes herself as “leftist”, joined the system back when it was still called Birdwatch, but says she has stopped contributing since Musk took over. “I didn’t mind providing free labour to Twitter under its previous ownership,” she says, “but I don’t feel that way now.”
However imperfect, Community Notes is a good example of a way to provide context and correct misinformation on the web. Twitter even took the radical step of allowing adverts to be Community Noted.
Donald Trump is at 5/2 odds to win the US presidential election next year, so it is more important than ever that false or misleading information online can be corrected quickly, before it spreads. Musk may not always practise what he preaches, but that doesn’t necessarily mean his message is wrong.