Empowering social media users to assess content helps fight misinformation
When fighting the spread of misinformation, social media platforms typically place most users in the passenger seat. Platforms often use machine-learning algorithms or human fact-checkers to flag false or misinforming content for users.
Just because this is the status quo doesn't mean it is the correct way or the only way to do it, says Farnaz Jahanbakhsh, a graduate student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).
She and her collaborators conducted a study in which they put that power into the hands of social media users instead.
"This work shows that a decentralized approach to moderation can lead to higher content reliability on socialmedia. This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms.
A lot of research into misinformation assumes that users cant decide what is true and what is not, and so we have to help them. We didnt see that at all."
https://news.mit.edu/2022/social-media-users-assess-content-1116