How reliable is disability-related information on social media? Exploring the prevalence of disability misinformation and assessing fact-checking tools.

Authors

  • Angela Sidhu Department of Information Sciences and Technology, George Mason University, Fairfax, VA
  • Ian Prazak Department of Information Sciences and Technology, George Mason University, Fairfax, VA
  • Leah Padovani Department of Information Sciences and Technology, George Mason University, Fairfax, VA
  • Mayuko Karakawa Department of Information Sciences and Technology, George Mason University, Fairfax, VA
  • Yool Lim Department of Information Sciences and Technology, George Mason University, Fairfax, VA
  • Julia Hsu Department of Information Sciences and Technology, George Mason University, Fairfax, VA
  • Myeong Lee Department of Information Sciences and Technology, George Mason University, Fairfax, VA

Abstract

Social media platforms such as YouTube and Facebook are key sources of disability information but are also prone to misinformation. Our study investigates how disability-related misinformation is shared and disseminated through social media. Specifically, we ask two questions: (1) how prevalent is misinformation regarding disabilities, and (2) how reliable are fact-checking tools at evaluating the truthfulness of disability-related claims? To answer these questions, we collected data from Facebook and YouTube. YouTube data was collected through YouTube APIs using 28 keywords, yielding 13,034 videos with transcripts. Facebook data was collected using EasyScraper from 20 public disability-related pages and groups identified in a large-scale survey. The data, initially consisting of 21,498 posts, was refined with the ChatGPT-4o API to identify posts that were both fact-checkable and relevant to disabilities, yielding 1,414 data points. We fact-checked 500 randomly sampled posts by manually cross-referencing reputable sources and by using AI tools (Originality.ai and ChatGPT-4o). Manual checks identified 85% of posts as “correct” and 15% as “mixed”, “incorrect”, or “non-objective”; Originality.ai identified 53% as “correct” and 47% as “mixed”, “incorrect”, or “non-objective”; ChatGPT marked 56% as “correct” and 44% as “mixed”, “incorrect”, or “non-objective”. Misinformation was most frequent in advertisements, followed by personal anecdotes. The AI tools diverged from the ground truth and overestimated the prevalence of misinformation, highlighting the need to choose such tools carefully.
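For illustration, the keyword-based YouTube collection described in the abstract might look roughly like the following Python sketch. It assumes the YouTube Data API v3 via google-api-python-client and the youtube-transcript-api package (classic get_transcript interface); the API key and keyword list are placeholders, and this is a sketch under those assumptions, not the authors' actual code.

```python
# Minimal sketch of keyword-based YouTube collection (illustrative only).
# Assumes: YouTube Data API v3 (google-api-python-client) and
# youtube-transcript-api; API_KEY and KEYWORDS are placeholders.
from googleapiclient.discovery import build
from youtube_transcript_api import YouTubeTranscriptApi

API_KEY = "YOUR_API_KEY"  # placeholder, not from the paper
KEYWORDS = ["disability benefits", "autism cure"]  # stand-ins for the study's 28 keywords

youtube = build("youtube", "v3", developerKey=API_KEY)
videos = []
for kw in KEYWORDS:
    # Search for videos matching the keyword (up to 50 results per call).
    response = youtube.search().list(
        q=kw, part="id", type="video", maxResults=50
    ).execute()
    for item in response["items"]:
        vid = item["id"]["videoId"]
        try:
            # Keep only videos whose transcripts are retrievable, mirroring
            # the paper's "videos with transcripts" criterion.
            transcript = YouTubeTranscriptApi.get_transcript(vid)
            videos.append({
                "video_id": vid,
                "keyword": kw,
                "text": " ".join(seg["text"] for seg in transcript),
            })
        except Exception:
            continue  # no transcript available; skip this video
```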
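Likewise, the ChatGPT-4o refinement step could be sketched as a per-post classification call using the openai Python SDK. The prompt wording, model identifier, and helper name below are assumptions for illustration; the paper does not specify its exact prompt or setup.

```python
# Illustrative sketch of the ChatGPT-4o relevance/fact-checkability filter.
# The prompt text and function name are hypothetical, not the authors' setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You will be given a social media post. Answer YES only if the post "
    "(a) makes a claim that can be fact-checked and (b) is about disability; "
    "otherwise answer NO.\n\nPost: {post}"
)

def is_checkable_disability_post(post_text: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(post=post_text)}],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Usage: filter scraped posts down to fact-checkable, disability-relevant ones.
# posts = [...]  # e.g., the 21,498 scraped posts
# filtered = [p for p in posts if is_checkable_disability_post(p)]
```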

Published

2024-10-13

Section

College of Engineering and Computing: Department of Information Sciences and Technology