Machine Credibility: How News Readers Evaluate AI-Generated Content
Abstract
The advent of AI-generated news as a novel form of content demands renewed attention to how reader perceptions are understood. This research asked: What evaluative criteria do readers use when perceiving automated news content? To answer this, the study employed a two-phase survey methodology designed to elicit reader perceptions of AI-generated news. Phase 1 yielded 26 descriptor words reflecting broad social perceptions of AI. In Phase 2, a series of exploratory factor analyses (EFA) was conducted on survey responses using the 26 items obtained in Phase 1 to uncover the underlying factors contributing to differences in how readers rated articles on those descriptors. In both phases, readers were informed at the beginning of the survey that the articles were generated using AI. The first EFA, using varimax rotation, revealed five salient factors underlying the 26 descriptors, labeled Quality, Engagement, Alienation, Effort, and Coherence. The second EFA, using oblimin rotation, contrastingly revealed nine salient factors, labeled Credibility, Prolixity, Engagement, Substance, Clarity, Alienation, Complexity, Effort, and Neutrality. Compared with factor-analytic results for human-generated news content, the findings offer new constellations of terms reflecting the dimensions that readers attend to in articles attributed to artificial intelligence.