How Do Language Intensity and Artificial Intelligence (AI) Affect Perceptions of Fact-checking Messages and Evaluations of Fact-checking Agencies?
- Xue, Haoning
- Advisor(s): Zhang, Jingwen
Fact-checking agencies are essential for correcting misinformation and informing the public, yet how people evaluate these agencies and their messages remains unclear (Brandtzaeg et al., 2017). Two factors, one concerning the message and one concerning the source, both central to theories of persuasion, were examined: the language intensity of fact-checking labels and AI as a fact-checking agency. Language intensity, a linguistic feature reflecting message specificity and emotionality, may implicitly influence the acceptance of misinformation corrections and behavioral intentions (Bowers, 1963). While AI has the potential to automate the fact-checking process and, as an ostensibly unbiased automated decision maker, to improve the acceptance of misinformation corrections, the social acceptance of AI in fact-checking remains unclear. This study investigated how language intensity and fact-checking agency (human vs. AI) influence evaluations of fact-checking messages and agencies, using an observational study of fact-checking messages on social media (N = 33,755) and two online experiments (combined N = 1,449) in the U.S. Both the observational study and the experiments showed that fact-checking messages with high language intensity elicited lower message credibility, although this effect diminished in the experiments when the messages were counter-attitudinal. In addition, participants perceived AI fact-checking agencies the same as human agencies. Individual differences in conspiracy ideation, political ideology, and demographics also significantly affected message credibility and engagement intentions. These findings suggest that linguistic nuances such as language intensity in fact-checking messages affect message perception and the acceptance of misinformation corrections. Theoretical and practical implications are discussed in detail.