
ChatGPT and the Illusion of Explanatory Depth

Abstract

The recent surge in the use of AI-powered chatbots such as ChatGPT has created new challenges for academia. These chatbots can facilitate student plagiarism and the submission of misleading content, undermining educational objectives. With plagiarism detectors unreliable in the face of AI-generated text, educational institutions have struggled to update their policies apace. This study assesses the effectiveness of warning messages, a common strategy for discouraging unethical use of ChatGPT, and investigates the illusion of explanatory depth (IOED) paradigm as an alternative intervention. An international sample of students rated their understanding of, likelihood of using, and moral stance toward ChatGPT-generated text in assignments, both before and after either reading a cautionary university message or explaining how ChatGPT works. The explanation task produced the expected reduction in ratings of understanding, but neither moral acceptability nor likelihood of use decreased with it. Reading the cautionary message likewise changed neither likelihood of use nor moral acceptability, although it unexpectedly increased ratings of understanding. These results suggest that targeting students' understanding of ChatGPT is insufficient to deter its unethical use, and that future interventions might instead prompt students to reflect on the moral issues surrounding AI-powered chatbots.
