The rise of deepfake technology—AI-generated videos that convincingly depict fabricated events—poses growing challenges for digital media discernment, public trust, and human memory. While technical detection methods have advanced, little empirical research has examined how deepfakes influence human cognition and memory. This dissertation empirically assesses whether deepfake videos can implant false memories in Generation Z adults and whether active debriefing or media literacy interventions mitigate these effects.
Drawing on literature from cognitive science, false memory research, and misinformation studies, this research employed an experimental design involving over 1,600 participants aged 18–26. Participants viewed a combination of real videos and deepfake videos featuring celebrity figures and were later assessed on memory retention, deepfake detection accuracy, and confidence in their discernment. Results show that deepfakes can produce false memories, especially when participants are unaware that they are viewing falsified videos. However, active debriefing (informing participants after exposure that some of the videos were deepfakes) significantly reduced the retention of these false memories.
Contrary to expectations, prior knowledge of the featured celebrities, familiarity with deepfake technology, and frequency of social media use did not improve detection accuracy or resilience to false memories. Participants frequently relied on intuitive audiovisual cues (e.g., lip-sync mismatches and unnatural eye movements) rather than factual reasoning, and often expressed high confidence in inaccurate judgments. Emotional responses to deepfakes were also pronounced: many participants reported anxiety, mistrust, and moral concern about the implications of deepfake technology and the irresponsible use of generative AI tools.