eScholarship
Open Access Publications from the University of California

Improving the Readability of Scientific Concept Analogies with Cognitive Conflict Reinforcement Learning

Abstract

Large language models (LLMs) are increasingly used in education and science communication to automatically generate explanations of scientific concepts. However, prior research has found that analogies produced by LLMs lack the human-like psycholinguistic properties that are important for readability. In this work, we propose cognitive conflict reinforcement learning (CCRL) to improve the psycholinguistic properties of LLM-generated analogies. Specifically, we create cognitive conflict between the original LLM and a cloned LLM during reinforcement learning, which helps address the cognitive rigidity problem in LLMs. Experimental results demonstrate that our approach significantly outperforms existing RL algorithms, and even human performance, across a range of readability metrics for the generated analogies.
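The abstract does not specify how the "cognitive conflict" between the original LLM and its clone enters the training objective. One plausible formulation, sketched below purely as an illustration, is to add a conflict bonus to the per-sample reward that grows with the divergence between the updated policy's output distribution and that of the frozen clone, so the policy is discouraged from rigidly reproducing its original behavior. The function names, the `beta` coefficient, and the KL-based conflict term are all assumptions, not details from the paper.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same vocabulary."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ccrl_reward(readability_score, policy_probs, clone_probs, beta=0.1):
    """Hypothetical CCRL-style reward (illustrative only): a base
    readability score plus a 'cognitive conflict' bonus proportional to
    the divergence between the trained policy and its frozen clone.
    A larger divergence means the policy has moved away from its
    original (rigid) behavior, which this sketch rewards."""
    conflict = kl_divergence(policy_probs, clone_probs)
    return readability_score + beta * conflict

# If the policy still matches its clone exactly, the conflict bonus is zero.
p = [0.5, 0.3, 0.2]
base_only = ccrl_reward(1.0, p, p)

# If the policy has diverged from the clone, the reward exceeds the base score.
diverged = ccrl_reward(1.0, [0.9, 0.05, 0.05], p)
```

In an actual RL setup the readability score would come from psycholinguistic metrics computed on the generated analogy, and the divergence would be evaluated over the model's token distributions rather than a toy vector.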
